Transfer data from Excel worksheet (openpyxl) to database table (dbf)
I have a simple problem: read an Excel worksheet, treat every row (about 83 columns) as a unique database record, add it to a local datum record, and ultimately append and write it to a DBF file.
I can extract all the values from Excel and add them to a list, but the list is not the correct type and I don't know how to prepare/convert the list into a database record. I am using openpyxl, dbf, and Python 3.7.
At the moment I am only testing and trying to prepare the data for row 3 (hence min_row = max_row = 3).
I understand that the data should be in the format
(('','','', ... 83 entries),
 ('','','', ... 83 entries)
)
but I do not know how to convert the list data into a record,
or, alternatively, how to read the Excel data directly into a DBF-appendable format.
tbl_tst.open(mode=dbf.READ_WRITE)  # all fields are character strings
for everyrow in ws_IntMstDBF.iter_rows(min_row=3, max_row=3, max_col=ws_IntMstDBF.max_column - 1):
    datum = []  # will defining datum as () help solve the problem?
    for idx, cells in enumerate(everyrow):
        if cells.value is None:  # for None entries, enter an empty string
            datum.append("")
            continue
        datum.append(cells.value)  # else enter the cell value
    tbl_tst.append(datum)  # append that record to the table !!! "list is not record" error here
tbl_tst.close()
The error complains that I am appending a list to the table when it should be a record, etc. Please advise how I can convert Excel rows into appendable DBF table data.
raise TypeError("data to append must be a tuple, dict, record, or template; not a %r" % type(data))
TypeError: data to append must be a tuple, dict, record, or template; not a <class 'list'>
python excel openpyxl dbf
asked Jan 2 at 13:57 by MakJ
3 Answers
Change
tbl_tst.append(datum)
to
tbl_tst.append(tuple(datum))
and that will get rid of the error. As long as all your cell data has the appropriate type, the append should work.
– Ethan Furman, answered Jan 2 at 17:38
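Stripped of openpyxl and dbf, the conversion can be sketched on its own (the cell values below are made-up stand-ins for cells.value):

```python
# The dbf append wants a tuple (or dict/record/template), not a list.
row_values = ["Acme", None, "42"]   # hypothetical per-column cell values

datum = []
for value in row_values:
    # None entries become empty strings, as in the question's loop
    datum.append("" if value is None else value)

record = tuple(datum)               # the shape tbl_tst.append() accepts
print(record)  # ('Acme', '', '42')
```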
Hi Ethan, I used your suggestion and was able to write the DBFs without a problem. An added bonus is that the PyInstaller exe is fully functional, whereas in my solution pysal was giving an error in the generated exe. I have one pending issue with the exe: relative paths such as ..inputfile work fine from the exe folder, but absolute paths do not work unless I move the exe file to the src folder. Is this a question for a new thread?
– MakJ
Jan 3 at 15:47
@MakJ: Yes, please ask a new question for that. Glad it's working for you now!
– Ethan Furman
Jan 3 at 19:28
Thank you for the responses; I went on a bit of a tangent since last night while trying different solutions.
One solution that worked for me is as follows:
I made sure that the worksheet data I am using is all strings/text, converting any null entries to string type containing a blank (space). The following code does this task:
# housekeeping: change every null cell to a string containing a space (0x20)
for eachrow in ws_IntMstDBF.iter_rows(min_row=2, max_row=ws_IntMstDBF.max_row, max_col=ws_IntMstDBF.max_column):
    for idx, cells in enumerate(eachrow):
        if cells.value is None:
            cells.data_type = 's'
            cells.value = " "
After writing the worksheet, I reopened it with a pandas dataframe and verified that the contents were all string type and that no "nan" values remained in the dataframe.
Then I used the df2dbf function from Dani Arribas-Bel, modified it to suit the data I am working with, and converted to dbf.
The code that imports the dataframe and converts it to dbf format is as follows:
abspath = Path(__file__).resolve()  # resolve relative path to absolute
rootpath = abspath.parents[3]       # root (my source file is 3 subdirectories deep)
xlspath = rootpath / 'sub-dir1' / 'sub-dir2' / 'sub-dir3' / 'test.xlsx'
# the code above only resolves the file location, ignore
pd_Mst_df = pd.read_excel(xlspath)
#print(pd_Mst_df)  # for debug
print("... Writing Master DBF file ")
df2dbf(pd_Mst_df, dbfpath)  # dbfpath is defined similarly to xlspath
The function df2dbf uses pysal to write the dataframe in dbf format.
I made some modifications to the code to detect the row length and character types, as follows:
import pandas as pd
import pysal as ps
import numpy as np

# code from function df2dbf
else:
    type2spec = {int: ('N', 20, 0),
                 np.int64: ('N', 20, 0),
                 float: ('N', 36, 15),
                 np.float64: ('N', 36, 15),
                 str: ('C', 200, 0)
                 }
    #types = [type(df[i].iloc[0]) for i in df.columns]
    types = [type('C') for i in range(0, len(df.columns))]  # force every column to character type
    specs = [type2spec[t] for t in types]
    db = ps.open(dbf_path, 'w')
# code continues from function df2dbf
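The type2spec mapping can be exercised on its own; the sample column types below are hypothetical stand-ins rather than types read from a real dataframe:

```python
import numpy as np

# Mapping from Python/NumPy types to DBF field specs
# (field type, width, decimal places), as used in df2dbf.
type2spec = {int: ('N', 20, 0),
             np.int64: ('N', 20, 0),
             float: ('N', 36, 15),
             np.float64: ('N', 36, 15),
             str: ('C', 200, 0)}

# Hypothetical column types, e.g. type(df[col].iloc[0]) per column
types = [str, np.int64, float]
specs = [type2spec[t] for t in types]
print(specs)  # [('C', 200, 0), ('N', 20, 0), ('N', 36, 15)]
```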
The pandas dataframe didn't require further modification, as all the source data was formatted correctly before being committed to the Excel file.
I will provide the link to pysal and df2dbf as soon as I find it on Stack Overflow.
– MakJ, answered Jan 3 at 4:44 (edited Jan 3 at 19:29 by Ethan Furman)
Check out the Python pandas library.
To read the data from Excel into a pandas dataframe, you could use pandas.read_excel.
Once the data is read into a pandas dataframe, you can manipulate it and afterwards write it to a database using pandas.DataFrame.to_sql.
See also this explanation for dealing with database I/O.
– Edmond Sesay, answered Jan 2 at 14:28
Some sample code would be more useful than just links.
– Ethan Furman
Jan 2 at 17:39
Sure! What Python database adapter are you using?
– Edmond Sesay
Jan 2 at 22:33
It's not for me. You have provided links, which might be useful, but the point of Stack Overflow is to have answers, not links to other places. So please put in some sample code of how you would use pandas.read_excel and pandas.DataFrame.to_sql, and then have the links for reference and further study.
– Ethan Furman
Jan 3 at 2:49
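A rough sketch of the two pandas calls the answer names, using an in-memory SQLite database and a hand-built frame in place of a real workbook (pd.read_excel would need an actual .xlsx file plus openpyxl):

```python
import sqlite3
import pandas as pd

# Stand-in for df = pd.read_excel("workbook.xlsx") with a real file
df = pd.DataFrame({"name": ["a", "b"], "qty": [1, 2]})

conn = sqlite3.connect(":memory:")          # sqlite3 connections are accepted by to_sql
df.to_sql("records", conn, index=False)     # write the frame as a database table

rows = conn.execute("SELECT name, qty FROM records").fetchall()
print(rows)  # [('a', 1), ('b', 2)]
```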