How to compare big files on a big data platform?

A few large files arrive each day (2-3, so not very frequently), and they are converted to JSON.

The content of a file looks like this:



[
  {
    "spa_ref_data": {
      "approval_action": "New",
      "spa_ref_no": "6500781413",
      "begin_date": null,
      "end_date": "20191009",
      "doc_file_name": "LEN_SPA_6500781413.json",
      "LEN_V": "v1",
      "version_no": null,
      "spa_ref_id": null,
      "spa_ref_notes": "MC00020544",
      "vend_code": "LEN"
    },
    "cust_data": [
      {
        "cust_name": null,
        "cust_no": null,
        "cust_type": "E",
        "state": null,
        "country": null
      },
      {
        "cust_name": null,
        "cust_no": null,
        "cust_type": "C",
        "state": null,
        "country": null
      }
    ],
    "product_data": [
      {
        "mfg_partno": "40AH0135US",
        "std_price": null,
        "rebate_amt": "180",
        "max_spa_qty": null,
        "rebate_type": null,
        "min_spa_qty": null,
        "min_cust_qty": null,
        "max_cust_qty": null,
        "begin_date": "20180608",
        "end_date": null
      },
      {
        "mfg_partno": "40AJ0135US",
        "std_price": null,
        "rebate_amt": "210",
        "max_spa_qty": null,
        "rebate_type": null,
        "min_spa_qty": null,
        "min_cust_qty": null,
        "max_cust_qty": null,
        "begin_date": "20180608",
        "end_date": null
      }
    ]
  },
  {
    "spa_ref_data": {
      "approval_action": "New",
      "spa_ref_no": "5309745006",
      "begin_date": null,
      "end_date": "20190426",
      "doc_file_name": "LEN_SPA_5309745006.json",
      "LEN_V": "v1",
      "version_no": null,
      "spa_ref_id": null,
      "spa_ref_notes": "MC00020101",
      "vend_code": "LEN"
    },
    "cust_data": [
      {
        "cust_name": null,
        "cust_no": null,
        "cust_type": "E",
        "state": null,
        "country": null
      },
      {
        "cust_name": null,
        "cust_no": null,
        "cust_type": "C",
        "state": null,
        "country": null
      }
    ],
    "product_data": [
      {
        "mfg_partno": "10M8S0HU00",
        "std_price": null,
        "rebate_amt": "698",
        "max_spa_qty": null,
        "rebate_type": null,
        "min_spa_qty": null,
        "min_cust_qty": null,
        "max_cust_qty": null,
        "begin_date": "20180405",
        "end_date": null
      },
      {
        "mfg_partno": "20K5S0CM00",
        "std_price": null,
        "rebate_amt": "1083",
        "max_spa_qty": null,
        "rebate_type": null,
        "min_spa_qty": null,
        "min_cust_qty": null,
        "max_cust_qty": null,
        "begin_date": "20180405",
        "end_date": null
      }
    ]
  }
]


This is mock data; the real file is an array with 30,000+ elements.



My goal is to compare each incoming file with the most recent previous one and extract the records that changed.



My team lead says I must use big data technologies, and performance must be good.



We use Apache NiFi and Hadoop tools for this.



Is there any advice?
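
To make the target concrete, here is a minimal sketch of the kind of diff I mean, in PySpark (assuming spa_ref_no uniquely identifies a record; the HDFS paths are placeholders):

    # Diff the incoming file against the previous one. Each file is a single
    # JSON array of records like the sample above, hence multiLine=True.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("spa-diff").getOrCreate()

    old = spark.read.option("multiLine", True).json("hdfs:///spa/previous.json")
    new = spark.read.option("multiLine", True).json("hdfs:///spa/incoming.json")

    # exceptAll compares entire rows, so any field-level change surfaces here.
    changed_or_added = new.exceptAll(old)   # new or modified records
    removed = old.exceptAll(new)            # records that disappeared

    changed_or_added.write.mode("overwrite").json("hdfs:///spa/diff/changed_or_added")
    removed.write.mode("overwrite").json("hdfs:///spa/diff/removed")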

Tags: apache-spark, hadoop, hive, apache-nifi

asked Nov 20 '18 at 9:20 by epicGeek (edited Nov 20 '18 at 14:05 by cricket_007)

  • Are you containerizing any of the data as you load it?

    – shainnif
    Nov 20 '18 at 13:54

  • It's unclear what your expected output is... If the performance "must be good", use a proper document database, not Hadoop.

    – cricket_007
    Nov 20 '18 at 14:06

  • Also, how big are your files really? An array with 30,000+ or even 100,000 elements could easily fit into RAM, so no special big data tools are needed. Of course you can use them, but also consider normal data analysis frameworks (or write the code yourself if needed ;)

    – Frank
    Nov 22 '18 at 21:50
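
As a point of reference for that last comment, a plain in-memory diff is only a few lines of Python (a minimal sketch, assuming spa_ref_no is a unique key and both files fit in memory; file names are placeholders):

    # Plain-Python diff of two JSON files that fit in RAM (30k records easily do).
    import json

    def load_by_key(path):
        with open(path) as f:
            records = json.load(f)
        # spa_ref_no as the record key is an assumption based on the sample data.
        return {r["spa_ref_data"]["spa_ref_no"]: r for r in records}

    old = load_by_key("previous.json")
    new = load_by_key("incoming.json")

    added   = [new[k] for k in new.keys() - old.keys()]
    removed = [old[k] for k in old.keys() - new.keys()]
    changed = [new[k] for k in new.keys() & old.keys() if new[k] != old[k]]

    print(len(added), "added;", len(removed), "removed;", len(changed), "changed")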
1 Answer

For example, you can use the ExecuteScript processor with a JavaScript script to compare the JSON documents; it is fast. You can also split your big JSON array with the SplitRecord processor and compare the pieces one by one with an ExecuteScript processor; that also works well.

– answered Dec 4 '18 at 7:26 by HereAndBeyond
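
A minimal sketch of that ExecuteScript idea, here using NiFi's Python (Jython) script engine rather than JavaScript; the snapshot path and the use of spa_ref_no as the record key are assumptions:

    # ExecuteScript body (Script Engine: python). Diffs the incoming flowfile
    # against a snapshot of the previous file and replaces the flowfile content
    # with the changed/added records.
    import json
    from org.apache.commons.io import IOUtils
    from java.nio.charset import StandardCharsets
    from org.apache.nifi.processor.io import StreamCallback

    class DiffCallback(StreamCallback):
        def process(self, inputStream, outputStream):
            new_recs = json.loads(IOUtils.toString(inputStream, StandardCharsets.UTF_8))
            with open("/data/spa/latest.json") as f:  # snapshot path is an assumption
                old_recs = json.load(f)
            old_by_key = {r["spa_ref_data"]["spa_ref_no"]: r for r in old_recs}
            changed = [r for r in new_recs
                       if old_by_key.get(r["spa_ref_data"]["spa_ref_no"]) != r]
            outputStream.write(json.dumps(changed).encode("utf-8"))

    flowFile = session.get()
    if flowFile is not None:
        flowFile = session.write(flowFile, DiffCallback())
        session.transfer(flowFile, REL_SUCCESS)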