
Bulk Insert Into SQL Server Table Using pyodbc: Cannot Find File

I know this kind of question has been asked before, but I still couldn't find the answer I'm looking for. I'm doing a bulk insert of a CSV file into a SQL Server table, but I am getting the error "The system cannot find the path specified".

Solution 1:

The BULK INSERT statement is executed on the SQL Server machine, so the file path must be accessible from that machine. You are getting "The system cannot find the path specified" because the path

C:\Users\kdalal\callerx_project\caller_x\new_file_name.csv

is a path on your machine, not the SQL Server machine.
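If you do want to keep using BULK INSERT, the file must sit somewhere the SQL Server service account can read, such as a local path on the server or a UNC share. A minimal sketch follows; the share name, table name, and connection string are assumptions for illustration, not taken from the original question:

```python
# Hypothetical UNC share that the SQL Server machine can reach.
server_side_path = r"\\fileserver\shared\new_file_name.csv"

# FORMAT = 'CSV' requires SQL Server 2017 or later; on older versions,
# use FIELDTERMINATOR = ',' and ROWTERMINATOR = '\n' instead.
sql = (
    "BULK INSERT dbo.caller_x "
    f"FROM '{server_side_path}' "
    "WITH (FORMAT = 'CSV', FIRSTROW = 2)"
)

# The connection details below are placeholders; uncomment and adjust
# for your environment.
# import pyodbc
# conn = pyodbc.connect(
#     "Driver={ODBC Driver 17 for SQL Server};"
#     "Server=MyServer;Database=MyDb;Trusted_Connection=yes;"
# )
# conn.execute(sql)
# conn.commit()
```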

Since you are dumping the contents of a DataFrame to the CSV file anyway, you could simply use df.to_sql to push the contents directly to SQL Server without an intermediate CSV file. To improve performance, you can tell SQLAlchemy to use pyodbc's fast_executemany option, as described in the related question:

Speeding up pandas.DataFrame.to_sql with fast_executemany of pyODBC
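The to_sql route might look like the sketch below. The server, database, driver string, and table name are assumptions you would replace with your own:

```python
def build_url(server: str, database: str) -> str:
    """Build a SQLAlchemy connection URL for SQL Server via pyodbc.

    The ODBC driver name is an assumption; check the drivers installed
    on your machine and adjust as needed.
    """
    return (
        f"mssql+pyodbc://@{server}/{database}"
        "?driver=ODBC+Driver+17+for+SQL+Server&trusted_connection=yes"
    )


if __name__ == "__main__":
    import pandas as pd
    from sqlalchemy import create_engine

    # fast_executemany=True enables pyodbc's bulk parameter binding,
    # which greatly speeds up many-row inserts (SQLAlchemy 1.3+).
    engine = create_engine(build_url("MyServer", "MyDb"), fast_executemany=True)

    df = pd.DataFrame({"caller_id": [1, 2], "name": ["alice", "bob"]})
    # Writes the rows straight to the table; no intermediate CSV needed.
    df.to_sql("caller_x", engine, if_exists="append", index=False)
```

Note that to_sql issues ordinary parameterized INSERTs rather than a server-side BULK INSERT, so the client machine never needs to expose a file to the server.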
