Dec 16, 2015 at 10:31 AM
Need to migrate a table with 43 million records from SQL Server to Azure SQL DB. What is the best approach in terms of performance? I am trying to migrate a single table via SQL Azure Migration Wizard (SQLAzureMW) 5.1; is that a correct way of migrating?
Dec 16, 2015 at 8:32 PM

Yes. SQLAzureMW uses BCP behind the scenes, which, as far as I know, is the fastest way to upload data. I have uploaded tables with several hundred million rows with no problem; it just takes a while. One time it took me 48 hours. Yours should not take that long unless you have a slow network. Something else I would do is raise your Azure SQL DB to the S3 or a Premium (P-series) service tier. This will let your data load a lot faster. Once your data is loaded, you can drop back down to the service tier that meets your needs.
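As a sketch, the tier bump can be scripted with sqlcmd; the server, database, credentials, and tier names below are all placeholders to replace with your own:

```shell
# Hypothetical names -- substitute your own server, database, and credentials.
# Raise the tier before the bulk load:
TIER_UP="ALTER DATABASE [MyDb] MODIFY (SERVICE_OBJECTIVE = 'S3')"
# Drop back down once the load is done:
TIER_DOWN="ALTER DATABASE [MyDb] MODIFY (SERVICE_OBJECTIVE = 'S0')"

# Run these against the master database of your Azure SQL server, e.g.:
echo sqlcmd -S myserver.database.windows.net -d master -U admin@myserver -P "***" -Q "$TIER_UP"
echo sqlcmd -S myserver.database.windows.net -d master -U admin@myserver -P "***" -Q "$TIER_DOWN"
```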

If you have several tables that are large, then what I do is:

1) Migrate my database (without data -- See Advanced Options) first.
2) Use different machines to migrate different tables at the same time (Data only this time).

If you are curious, check out SQLAzureMW.exe.config; that is where I store the BCP command information. You can tweak it if you want. Here is what to look for:
<add key="ChunkSize" value="100000000"/>                           <!-- Number of records to process per BCP iteration (Uploading to Azure) -->
<add key="BCPArgsIn" value="{0} in {1} -E -n -C RAW -b 1000 -a 4096"/>    <!-- BCP in command parameters -->
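For reference, here is a sketch of what those BCPArgsIn switches mean when expanded into a standalone bcp call (the database, table, and server names are placeholders):

```shell
# Switch meanings (standard bcp utility options):
#   -E       keep identity values from the data file
#   -n       use native (binary) data format
#   -C RAW   no code-page conversion between client and server
#   -b 1000  commit a batch every 1000 rows
#   -a 4096  network packet size in bytes
BCP_ARGS="-E -n -C RAW -b 1000 -a 4096"
echo "bcp MyDb.dbo.MyTable in MyTable.dat $BCP_ARGS -S myserver.database.windows.net -U user@myserver -P ***"
```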
Good luck!
Dec 21, 2015 at 7:34 AM
Edited Dec 21, 2015 at 10:09 AM
Thanks a lot, George, for your response. I have been trying to upload the data for the last 4 days, but only 4 million records have been copied. The .dat file took nearly 10 hours to generate, and then the copy has been running for the last 3 days. Now it has stopped with a duplicate-record error (somehow it tried to copy a row that was already inserted). I used this config:

<add key="ChunkSize" value="100000000"/>
<add key="BCPArgsIn" value="{0} in {1} -E -n -C RAW -b 10000 -a 4096"/>

Shall I set the -b parameter to 100000 and try generating again? Please guide me on what the problem is in my scenario; I am sure the network is good.
Dec 21, 2015 at 3:40 PM
Hi, out of curiosity, do you have any computed columns or triggers on your table? Can you share the table schema with me?

One of the things that I have done in the past is just take SQLAzureMW out of the picture. In reality, the only thing SQLAzureMW does is use BCP to download your table to a .dat file and then use BCP to upload the data to Azure SQL DB. If you saved the output from SQLAzureMW when uploading the data, you will find the actual BCP command string in it. You can copy that and run the BCP command through a command window, or just create your own BCP upload command.
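A minimal sketch of that manual round trip, with placeholder server, database, table, and credential values to fill in with your own:

```shell
# 1) Export the table from the local SQL Server to a native-format .dat file
#    (-T = trusted/Windows authentication against the source server):
BCP_OUT='bcp MyDb.dbo.MyBigTable out MyBigTable.dat -n -C RAW -S localhost -T'

# 2) Import the .dat file into Azure SQL DB, committing every 10000 rows:
BCP_IN='bcp MyDb.dbo.MyBigTable in MyBigTable.dat -E -n -C RAW -b 10000 -a 4096 -S myserver.database.windows.net -U user@myserver -P ***'

# Print the commands; run them in a command window against your own servers.
echo "$BCP_OUT"
echo "$BCP_IN"
```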

Anyway, start the BCP command in a command window. Depending on your batch size, BCP will print a line every time it commits a batch of records (e.g., every 1000). The BCP output will look something like this:

Starting copy...
1000 rows sent to SQL Server. Total sent: 1000
1000 rows sent to SQL Server. Total sent: 2000
1000 rows sent to SQL Server. Total sent: 3000
1000 rows sent to SQL Server. Total sent: 4000
1000 rows sent to SQL Server. Total sent: 5000
1000 rows sent to SQL Server. Total sent: 6000
...
1000 rows sent to SQL Server. Total sent: 2951000
2951737 rows copied.
Network packet size (bytes): 4096
Clock Time (ms.) Total : 104110 Average : (881.15 rows per sec.)

If BCP dies because of a network error or something, then you will know how many records got uploaded (say it died at record 85368). You would know that the last commit was at 85000 and that you could restart from there. To restart, just add the -F switch to your BCP command, telling BCP the first row you want it to start uploading from; in this case, -F 85001. Continue this process until all records are uploaded. You can also do a SELECT COUNT(*) on the table to check the number of rows uploaded.
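As a sketch, if the last committed batch ended at row 85000, the restarted import (placeholder names again) just adds -F 85001, and a row count sanity-checks progress on the Azure side:

```shell
# Resume the upload at the first row after the last committed batch:
BCP_RESUME='bcp MyDb.dbo.MyBigTable in MyBigTable.dat -E -n -C RAW -b 10000 -a 4096 -F 85001 -S myserver.database.windows.net -U user@myserver -P ***'
echo "$BCP_RESUME"

# Verify how many rows actually landed in Azure SQL DB:
echo sqlcmd -S myserver.database.windows.net -d MyDb -U user@myserver -P "***" -Q "SELECT COUNT(*) FROM dbo.MyBigTable"
```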

Now, all of that said, let's take a quick look at the table schema if possible. If you want, contact me offline so that we can keep your information confidential.

Dec 31, 2015 at 7:55 AM
Edited Jan 4 at 3:23 AM
Sorry for the delayed response. It worked after changing the BCP settings, and the entire table migrated. Thanks for your assistance :)