Limiting data in pd.DataFrame

I am trying to do the following when loading an internal data structure into pandas:



df = pd.DataFrame(self.data,
                  nrows=num_rows+500,
                  skiprows=skip_rows,
                  header=header_row,
                  usecols=limit_cols)


However, none of those arguments appear to have any effect (unlike when reading a CSV file); only the data itself is loaded. Is there another method I can use to have more control over the data that I'm ingesting, or do I need to rebuild the data before loading it into pandas?



My input data looks like this:



data = [
    ['ABC', 'es-419', 'US', 'Movie', 'Full Extract', 'PARIAH', '', '', 'EST', 'Features - EST', 'HD', '2017-05-12 00:00:00', 'Open', 'WSP', '10.5000', '', '', '', '', '10.5240/8847-7152-6775-8B59-ADE0-Y', '10.5240/FFE3-D036-A9A4-9E7A-D833-1', '', '', '', '04065', '', '', '2011', '', '', '', '', '', '', '', '', '', '', '', '113811', '', '', '', '', '', '04065', '', 'Spanish (LAS)', 'US', '10', 'USA NATL SALE', '2017-05-11 00:00:00', 'TIER 3', '21', '', '', 'USA NATL SALE-SPANISH LANGUAGE', 'SPAN'],
    ['ABC', 'es-419', 'US', 'Movie', 'Full Extract', 'PATCH ADAMS', '', '', 'EST', 'Features - EST', 'HD', '2017-05-12 00:00:00', 'Open', 'WSP', '10.5000', '', '', '', '', '10.5240/DD84-FBF4-8F67-D6F3-47FF-1', '10.5240/B091-00D4-8215-39D8-0F33-8', '', '', '', 'U2254', '', '', '1998', '', '', '', '', '', '', '', '', '', '', '', '113811', '', '', '', '', '', 'U2254', '', 'Spanish (LAS)', 'US', '10', 'USA NATL SALE', '2017-05-11 00:00:00', 'TIER 3', '21', '', '', 'USA NATL SALE-SPANISH LANGUAGE', 'SPAN']
]


So I'm looking to be able to state which rows it should load (or skip) and which columns it should use (usecols). Is that possible to do with an internal Python data structure?

python pandas

asked Jan 2 at 22:01 by David L
edited Jan 2 at 22:18

  • DataFrame has no such arguments. Did you mean read_table or read_csv?

    – Parfait
    Jan 2 at 22:04













  • @Parfait I have a list of lists that I'm trying to load into pandas. So would read_table work on that?

    – David L
    Jan 2 at 22:05











  • We need a fuller code block and input data. read_table reads from a file or buffer.

    – Parfait
    Jan 2 at 22:07













  • @Parfait thanks -- I've updated the question above.

    – David L
    Jan 2 at 22:18






  • Since you're not using .csv data, you don't actually have rows or cols to skip. For skipping rows, you can just slice your list, e.g. to skip 10, call the constructor on self.data[10:], and you can slice into each sublist similarly for skip_cols. If you feed self.data into a numpy array instead of a list of lists, that gives you more control over multidimensional indexing/slicing.

    – G. Anderson
    Jan 2 at 22:25
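
For reference, here is a minimal sketch of the slicing approach described in the last comment above. The variable names mirror the ones in the question (header_row, skip_rows, num_rows, limit_cols), but the data and values are purely illustrative:

import pandas as pd

# Hypothetical stand-ins for the question's variables; real values would come
# from the surrounding application code.
data = [
    ['col_a', 'col_b', 'col_c'],          # header row
    ['ABC', 'es-419', 'US'],
    ['DEF', 'en-US', 'CA'],
    ['GHI', 'fr-FR', 'FR'],
]

header_row = 0        # index of the row holding the column names
skip_rows = 1         # number of data rows to skip after the header
num_rows = 2          # maximum number of data rows to keep
limit_cols = [0, 2]   # positional indices of the columns to keep (like usecols)

# Column names, restricted to the wanted columns.
columns = [data[header_row][i] for i in limit_cols]

# Emulate skiprows/nrows by slicing the outer list...
start = header_row + 1 + skip_rows
rows = data[start:start + num_rows]

# ...and emulate usecols by slicing each inner list.
trimmed = [[row[i] for i in limit_cols] for row in rows]

df = pd.DataFrame(trimmed, columns=columns)
print(df)
#   col_a col_c
# 0   DEF    CA
# 1   GHI    FR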
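
For context on the first comment: nrows, skiprows, header, and usecols are parameters of pd.read_csv / pd.read_table, which parse text from a file or buffer, so they only apply once the data is serialized. A sketch with made-up CSV content, showing those arguments where they are actually supported:

import io
import pandas as pd

# Illustrative CSV text only; in practice this would come from a file or buffer.
csv_text = "a,b,c\n1,2,3\n4,5,6\n7,8,9\n"

df = pd.read_csv(
    io.StringIO(csv_text),  # read_csv parses text, unlike the DataFrame constructor
    header=0,               # after skips, line 0 holds the column names
    skiprows=[1],           # skip the file line at index 1 ("1,2,3")
    nrows=2,                # then read at most 2 data rows
    usecols=["a", "c"],     # keep only these columns
)
print(df)
#    a  c
# 0  4  6
# 1  7  9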

















