Building a stateful ETL application with Python

I am tasked with building an ETL application that processes time-stamped records, and I am doing so using Python and Postgres. I have a working application, but I want to see if there is a way to speed up the processing. Keep in mind that this data is state-dependent: transactions later in the process use data that was generated by previous transactions. I have already chunked the data to allow for parallel processing, but the process is still only as fast as the largest chunk, and I can't break the chunks down any further. Apologies in advance for the vagueness, but I am looking for advice on optimizing this application.
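
For illustration, a minimal sketch of the kind of chunked-parallel setup described above. multiprocessing is an assumption here, and process_chunk stands in for the real per-chunk work; the point is that total wall time is bounded by the slowest chunk:

    from multiprocessing import Pool

    def process_chunk(chunk):
        # Stand-in for the real per-chunk ETL work
        return len(chunk)

    if __name__ == "__main__":
        chunks = [[1, 2], [3, 4, 5], [6]]  # hypothetical pre-chunked data
        with Pool() as pool:
            # Chunks run in parallel, but the job finishes only when the
            # largest (slowest) chunk does
            results = pool.map(process_chunk, chunks)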



The process begins by reading in a single transaction record and searching a reference table for the contents of that transaction's input and output. The reference table is where state is maintained, so I am always using the latest contents for the input and updating the contents based on the outputs. The outputs of the process are the result of calculations based on business logic and are written to a Postgres database in chunks.
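
A minimal sketch of that read-compute-write loop, assuming psycopg2; the table and column names (reference_table, key, contents) and apply_business_logic are hypothetical placeholders, not the actual schema or logic:

    import psycopg2

    conn = psycopg2.connect("dbname=etl user=etl")  # connection details assumed

    def apply_business_logic(txn, state):
        # Stand-in for the real calculations
        return state

    def process_transaction(txn):
        with conn.cursor() as cur:
            # Read the latest state for this transaction's input
            cur.execute(
                "SELECT contents FROM reference_table WHERE key = %s",
                (txn["input_key"],),
            )
            row = cur.fetchone()
            state = row[0] if row else None

            # Business logic derives the output from the current state
            output = apply_business_logic(txn, state)

            # Write the new state back so later transactions see it
            cur.execute(
                "UPDATE reference_table SET contents = %s WHERE key = %s",
                (output, txn["output_key"]),
            )
        conn.commit()
        return output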



I understand that I have not provided any code and am being a bit vague, but I would really appreciate any advice. Some thoughts I had were incorporating Redis in some way, as well as eliminating pandas from the Python script.
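
On the Redis idea, one possible shape is to keep the hot reference-table state in Redis so each per-transaction lookup avoids a Postgres round trip, flushing updated state back to Postgres in batches. A sketch, assuming JSON-serializable state and a hypothetical load_state_from_postgres helper:

    import json
    import redis

    r = redis.Redis(host="localhost", port=6379, db=0)

    def load_state_from_postgres(key):
        # Hypothetical: fetch the row from the Postgres reference table
        ...

    def get_state(key):
        # Try the in-memory cache first; fall back to Postgres on a miss
        cached = r.get(key)
        if cached is not None:
            return json.loads(cached)
        state = load_state_from_postgres(key)
        r.set(key, json.dumps(state))
        return state

    def put_state(key, state):
        # Update the cache immediately; defer the Postgres write to a batch flush
        r.set(key, json.dumps(state))
        r.rpush("dirty_keys", key)  # remember which keys still need flushing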

python postgresql pandas redis etl

asked Jan 2 at 14:26 by trgtrg (33)

  • you can check the dask documentation, which gives parallel processing capabilities.

    – anky_91
    Jan 2 at 15:02
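
For context, a minimal dask.delayed sketch of that suggestion. Because the data is state-dependent, inter-chunk dependencies can be expressed in the task graph so independent chunks run in parallel while dependent ones wait; process_chunk and load_chunks are hypothetical placeholders:

    import dask

    @dask.delayed
    def process_chunk(chunk, upstream_state=None):
        # Stand-in for the existing per-chunk processing
        return chunk

    def load_chunks():
        # Hypothetical: however the pipeline currently produces its chunks
        return [[1], [2], [3]]

    chunks = load_chunks()
    a = process_chunk(chunks[0])
    b = process_chunk(chunks[1])                    # independent of a: runs in parallel
    c = process_chunk(chunks[2], upstream_state=a)  # depends on a's result
    results = dask.compute(a, b, c)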