Deploying a Dataflow Pipeline using Python and Apache Beam
I am new to using Apache Beam and Dataflow. I would like to use a dataset as the input for a function that will be deployed in parallel using Dataflow. Here is what I have so far:
import os
import apache_beam as beam
from apache_beam.options.pipeline_options import SetupOptions
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.options.pipeline_options import StandardOptions
from apache_beam.options.pipeline_options import GoogleCloudOptions
os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = '[location of json service credentials]'
dataflow_options = ['--project=[PROJECT NAME]',
                    '--job_name=[JOB NAME]',
                    '--temp_location=gs://[BUCKET NAME]/temp',
                    '--staging_location=gs://[BUCKET NAME]/stage']
options = PipelineOptions(dataflow_options)
gcloud_options = options.view_as(GoogleCloudOptions)
options.view_as(StandardOptions).runner = 'DataflowRunner'
with beam.Pipeline(options=options) as p:
    new_p = (p
             | beam.io.ReadFromText(file_pattern='[file location].csv',
                                    skip_header_lines=1)
             | beam.ParDo([Function Name]()))
The CSV file will have 4 columns and n rows. Each row represents an instance and each column represents a parameter of that instance. I would like to pass all of the parameters of an instance into a beam.DoFn so I can run it on multiple machines with the help of Dataflow.
How do I write the function so that it takes multiple arguments from a PCollection? The function below is how I imagine it would go.
class function_name(beam.DoFn):
    def process(self, col_1, col_2, col_3, col_4):
        result = function(col_1) + function(col_2) + function(col_3) + function(col_4)
        return [result]
python google-cloud-dataflow apache-beam
asked Nov 21 '18 at 0:20
mgh5021
Beam has the concept of a PCollection consisting of elements. In your example the CSV file is read line by line, and each line will be an element that is mapped implicitly to your callable inside the ParDo step. You don't need multiple arguments in your process method; you just need a single argument, which in this case will be a string, e.g. "col1_value, col2_value, col3_value, col4_value", that you will need to split, process, and return as a new single element. If you want to return multiple values, use a tuple, dict, or some other collection as your return element.
– Davos
Jan 7 at 10:59
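To make the comment concrete, here is a minimal sketch of the kind of process method it describes; the class name and the comma delimiter are assumptions based on the CSV layout in the question:

import apache_beam as beam

class ParseRow(beam.DoFn):  # hypothetical name
    def process(self, element):
        # 'element' is one whole line of the CSV, e.g. "a,b,c,d"
        col_1, col_2, col_3, col_4 = element.split(',')
        # Yield a single tuple holding all four parameters of the instance.
        yield (col_1, col_2, col_3, col_4)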
1 Answer
The materialized return from ReadFromText will be a PCollection of strings in which each element is still delimited.
Your ParDo should take a single string element, split it, and yield the result as a dict of column name and value.
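A minimal sketch of that approach, assuming a comma-delimited file and hypothetical column names; my_function is a stand-in for whatever per-parameter computation the question has in mind:

import apache_beam as beam

COLUMNS = ['col_1', 'col_2', 'col_3', 'col_4']  # assumed header names

def my_function(value):
    # Placeholder for the real work done on each parameter.
    return len(value)

class SplitToDict(beam.DoFn):  # hypothetical name
    def process(self, element):
        # Turn one delimited line into a dict keyed by column name.
        yield dict(zip(COLUMNS, element.split(',')))

class ApplyFunction(beam.DoFn):  # hypothetical follow-on step
    def process(self, row):
        # Each invocation sees one dict, so all four parameters
        # of an instance are available at once.
        yield sum(my_function(row[c]) for c in COLUMNS)

with beam.Pipeline() as p:  # DirectRunner by default, handy for local testing
    results = (p
               | beam.io.ReadFromText('[file location].csv', skip_header_lines=1)
               | beam.ParDo(SplitToDict())
               | beam.ParDo(ApplyFunction()))

The same pipeline should run on Dataflow by passing the PipelineOptions from the question instead of the defaults.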
answered Nov 21 '18 at 18:35
Eric Schmidt