Using Azure SQL DW PolyBase for data ingestion from ADLS Gen 1 using VNet service endpoints

I am trying to use PolyBase in Azure SQL Data Warehouse (SQL DW) to ingest data on Azure Data Lake Store (ADLS) Gen 1, persisted in Parquet format by a Hadoop cluster running in a VNet. The load works, but the throughput is quite poor: approximately 10 MB/s. My assumption is that the traffic is going over the public Internet rather than the Azure backbone network.

To address this, I've enabled VNet service endpoints as follows:

  • VNet to ADLS (as per this link)
  • VNet to Azure SQL Data Warehouse (as per this link)

However, even after doing so, there is no performance gain. My understanding is that with these endpoints enabled, traffic should travel over the Azure backbone network, yet I see no difference. Am I missing anything in this workflow?
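For reference, the PolyBase objects involved look roughly like the sketch below. Every object name, the ADLS account, the OAuth values, the location path, and the column list are placeholders for illustration, not the actual setup:

```sql
-- Sketch of a PolyBase load from ADLS Gen 1 into SQL DW.
-- All names, paths and credential values are placeholders.

-- Service-principal credential for ADLS Gen 1.
CREATE DATABASE SCOPED CREDENTIAL AdlsCredential
WITH IDENTITY = '<client-id>@https://login.microsoftonline.com/<tenant-id>/oauth2/token',
     SECRET   = '<client-secret>';

-- External data source pointing at the lake.
CREATE EXTERNAL DATA SOURCE AdlsGen1
WITH ( TYPE = HADOOP,
       LOCATION = 'adl://<account>.azuredatalakestore.net',
       CREDENTIAL = AdlsCredential );

-- File format: declaring Snappy matters, since PolyBase must be
-- told how the Parquet files are compressed.
CREATE EXTERNAL FILE FORMAT ParquetSnappy
WITH ( FORMAT_TYPE = PARQUET,
       DATA_COMPRESSION = 'org.apache.hadoop.io.compress.SnappyCodec' );

-- External table over the Parquet directory (columns invented).
CREATE EXTERNAL TABLE ext.Events (
    event_id BIGINT,
    payload  VARCHAR(4000)
)
WITH ( LOCATION = '/data/events/',
       DATA_SOURCE = AdlsGen1,
       FILE_FORMAT = ParquetSnappy );

-- CTAS performs the actual parallel ingestion into SQL DW.
CREATE TABLE dbo.Events
WITH ( DISTRIBUTION = ROUND_ROBIN,
       CLUSTERED COLUMNSTORE INDEX )
AS SELECT * FROM ext.Events;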
  • Just to add to that: I've tried this with low (500) and high (2000) DWUs and the throughput is still below expectations.

    – Irfan Elahi
    Nov 22 '18 at 6:27
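As an aside, switching between those service levels is a one-line T-SQL statement run against the master database (the warehouse name below is a placeholder):

```sql
-- Run in master; 'mydw' is a placeholder database name.
ALTER DATABASE mydw MODIFY ( SERVICE_OBJECTIVE = 'DW2000' );
```

That throughput barely moved between DW500 and DW2000 already hinted the bottleneck was not compute.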

  • Irfan, is this for the Australian customer with whom I think you've been engaged for the last year? I've asked the customer's MS team to engage, as customer-specific network issues may be at play.

    – Ron Dunn
    Nov 22 '18 at 22:50

  • The issue got sorted out. It turned out the Parquet data was Snappy-compressed, even though Impala reported it as uncompressed. After factoring that in, we got around 130+ MB/s, which is considerably better.

    – Irfan Elahi
    Dec 17 '18 at 0:56
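The effect described in that comment is easy to quantify: if throughput is measured against the compressed bytes read from ADLS, the compression ratio silently multiplies the effective (uncompressed) rate. A rough sketch, with illustrative numbers only (the actual ratio was not reported):

```python
def effective_throughput(measured_mb_per_s: float, compression_ratio: float) -> float:
    """Uncompressed-equivalent rate when the measured rate counts compressed bytes.

    compression_ratio = uncompressed_size / compressed_size (> 1 for Snappy).
    """
    return measured_mb_per_s * compression_ratio


# Illustrative only: a 3x compression ratio turns a measured
# "10 MB/s" of compressed reads into 30 MB/s of logical data.
print(effective_throughput(10.0, 3.0))  # → 30.0
```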
azure azure-data-lake azure-sqldw polybase

asked Nov 22 '18 at 6:21
Irfan Elahi