How to read a flat file without footer row using Informatica
I'm working on a workflow where I have to read a flat file while skipping its footer row. How do I handle this row without breaking the file structure in Informatica? Please find sample data below.
HEADER, FILE.TXT, 2018-20-11
"COL1","COL2","COL3","COL4"
"Dave","123-456-7890", "Charlotte", "NC"
TRAILER,1
Thanks in advance!!
informatica informatica-powercenter
After reading the file you can filter out the record where COL1 = 'TRAILER'. – Samik, Nov 21 '18 at 5:24
Sorry, there is a double quote as a text qualifier for the data and column names. I have already tried the logic you mentioned. – Meet Rohan, Nov 21 '18 at 14:47
But why would that be an issue? – Samik, Nov 21 '18 at 19:03
Hi Sam, I have applied the logic you suggested, but it's not working as expected for some of the mappings. The issue is that if the field used in the filter condition has all null values, the filter condition does not work as expected. I have tried passing '' for all the null values, still no luck. – Meet Rohan, Dec 10 '18 at 17:01
You can try the ISNULL function. – Samik, Dec 10 '18 at 17:09
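(For reference, a filter condition along the lines discussed in these comments might look like the sketch below. It assumes COL1 is the port read from the flat-file source; the ISNULL branch keeps rows where the column is empty, which is what trips up a plain COL1 != 'TRAILER' test, since a NULL comparison makes the Filter drop the row.)

    -- Filter transformation condition (sketch): keep NULL rows, drop the trailer record
    IIF(ISNULL(COL1), TRUE, UPPER(LTRIM(RTRIM(COL1))) != 'TRAILER')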
3 Answers
I won't pretend I've done it myself, but the following post outlines removing the quotes... Nico is usually spot on, so consider the approach he suggests along with the filter Samik mentioned:
https://network.informatica.com/thread/9404
You can use a command to read the file. Just change the Input Type Source Qualifier property on the session from File to Command, and put head -n-1 yourFileName as the Command. That should do the trick.
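For illustration, assuming the Integration Service runs on Linux and the source file is FILE.TXT, the command strips the trailer before PowerCenter parses the file:

    head -n -1 FILE.TXT    # prints every line except the last (TRAILER,1)

The negative line count needs GNU head; on platforms without it (AIX, Solaris, older macOS), sed '$d' FILE.TXT does the same job.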
Not sure if you can avoid reading the footer line, but you can eliminate footer line(s).
If the file structure is consistent, you can use just a Filter transformation to filter out the "TRAILER,1" row.
If not, you can try this approach:

    SD > SQ > AGG [+ v_cnt = COUNT(COL1)]
                  [+ o_dummy = 1        ] > JN (join on o_dummy) > FIL (where o_seq < v_cnt) > TGT
            > EXP [+ v_seq = v_seq + 1  ] >
                  [+ o_seq = v_seq      ]
                  [+ o_dummy = 1        ]

After the Source Qualifier, "split" the stream into an Aggregator and an Expression transformation.
In the Aggregator add two ports:
- v_cnt, for counting all rows in the file, and
- o_dummy, an output port holding the hardcoded value "1" for joining back with the second stream.
In the Expression transformation add three new ports:
- v_seq, a variable port assigning a sequential number,
- o_dummy, an output port holding the hardcoded value "1", and
- o_seq, an output port that returns v_seq.
In the Joiner, join the two streams on o_dummy = o_dummy and set the ports from the Aggregator as the Master (for performance).
If there are empty rows at the bottom, after the footer row, you can add another filter (better placed right after the SQ) to filter out the empty lines. In fact, you can eliminate as many footer lines as you need by filtering on o_seq <= v_cnt - N.
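A rough sketch of the port definitions described above (names follow the diagram; adapt them to your own mapping). One caveat, given the null-value discussion in the question's comments: COUNT(COL1) skips rows where COL1 is NULL, so COUNT(*) is safer if that column can be empty.

    Aggregator (no group-by port, so it emits a single row):
        v_cnt    output   = COUNT(*)
        o_dummy  output   = 1

    Expression (row by row):
        v_seq    variable = v_seq + 1
        o_seq    output   = v_seq
        o_dummy  output   = 1

    Joiner:  normal join, condition o_dummy = o_dummy (Aggregator side as Master)
    Filter:  o_seq < v_cnt    -- keeps everything except the last (footer) row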