
Commit a7ed549

Merge pull request #35967 from jovanpop-msft/patch-62

Changing option titles and adding small change for Fabric

2 parents 1dfd780 + 5801155

1 file changed

docs/t-sql/statements/bulk-insert-transact-sql.md (+23 −20)
````diff
@@ -4,7 +4,7 @@ description: Transact-SQL reference for the BULK INSERT statement.
 author: markingmyname
 ms.author: maghan
 ms.reviewer: randolphwest, wiassaf
-ms.date: 11/26/2025
+ms.date: 12/01/2025
 ms.service: sql
 ms.subservice: t-sql
 ms.topic: reference
````
````diff
@@ -126,7 +126,7 @@ The `BULK INSERT` statement has different arguments and options in different platforms.
 | --- | --- | --- | --- |
 | Data source | Local path, Network path (UNC), or Azure Storage | Azure Storage | Azure Storage, One Lake |
 | Source authentication | Windows authentication, SAS | Microsoft Entra ID, SAS token, managed identity | Microsoft Entra ID |
-| Unsupported options | `*` wildcards in path | `*` wildcards in path | `DATA_SOURCE`, `FORMATFILE_DATA_SOURCE`, `ERRORFILE`, `ERRORFILE_DATA_SOURCE` |
+| Unsupported options | `*` wildcards in path | `*` wildcards in path | `DATAFILETYPE = {'native' | 'widenative'}` |
 | Enabled options but without effect | | | `KEEPIDENTITY`, `FIRE_TRIGGERS`, `CHECK_CONSTRAINTS`, `TABLOCK`, `ORDER`, `ROWS_PER_BATCH`, `KILOBYTES_PER_BATCH`, and `BATCHSIZE` aren't applicable. They don't throw a syntax error, but they don't have any effect |

 #### *database_name*
````
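Per the matrix above, loads that must run unchanged on Fabric should avoid the native file types and the no-effect options. A minimal sketch, assuming a hypothetical `dbo.Sales` table and storage URL:

```sql
-- Hypothetical example: a Fabric-compatible load. DATAFILETYPE = 'native' and
-- 'widenative' are unsupported there, so stay with character-format CSV input.
BULK INSERT dbo.Sales
FROM 'https://<storage-account>.blob.core.windows.net/data/sales.csv'
WITH (FORMAT = 'CSV', FIRSTROW = 2);
```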
````diff
@@ -176,7 +176,7 @@ FROM 'https://<data-lake>.blob.core.windows.net/public/curated/covid-19/bing_cov
 > [!NOTE]
 > Replace `<data-lake>.blob.core.windows.net` with an appropriate URL.

-#### CODEPAGE = { 'ACP' | 'OEM' | 'RAW' | '*code_page*' }
+#### CODEPAGE

 Specifies the code page of the data in the data file. `CODEPAGE` is relevant only if the data contains **char**, **varchar**, or **text** columns with character values greater than `127` or less than `32`. For an example, see [Specify a code page](#d-specify-a-code-page).
````

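The renamed `CODEPAGE` heading keeps its argument values (`'ACP' | 'OEM' | 'RAW' | 'code_page'`). A hedged sketch, with a hypothetical table and file path, of loading UTF-8 data by code page number:

```sql
-- Hypothetical example: name the file's code page explicitly.
-- Code page 65001 (UTF-8) requires SQL Server 2016 or later.
BULK INSERT dbo.Products
FROM 'D:\data\products_utf8.csv'
WITH (CODEPAGE = '65001', FORMAT = 'CSV', FIRSTROW = 2);
```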
````diff
@@ -200,7 +200,7 @@ You should specify a collation name for each column in a [format file](../../rel
 | `RAW` | No conversion from one code page to another occurs. `RAW` is the fastest option. |
 | *code_page* | Specific code page number, for example, 850.<br /><br />Versions before [!INCLUDE [sssql16-md](../../includes/sssql16-md.md)] don't support code page 65001 (UTF-8 encoding). |

-#### DATAFILETYPE = { 'char' | 'widechar' | 'native' | 'widenative' }
+#### DATAFILETYPE

 Specifies that `BULK INSERT` performs the import operation using the specified data-file type value.
````

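For illustration, a sketch of the `widechar` type for Unicode (UTF-16) input; the table and file are hypothetical:

```sql
-- Hypothetical example: import a Unicode character file with DATAFILETYPE = 'widechar'.
BULK INSERT dbo.Customers
FROM 'D:\data\customers_unicode.txt'
WITH (DATAFILETYPE = 'widechar', FIELDTERMINATOR = ',');
```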
````diff
@@ -232,7 +232,7 @@ WITH (DATAFILETYPE = 'char', FIRSTROW = 2);

 ::: moniker-end

-#### DATA_SOURCE = '*data_source_name*'
+#### DATA_SOURCE

 **Applies to:** [!INCLUDE [sssql17-md](../../includes/sssql17-md.md)] and later versions, and Azure SQL Database.
````

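The option pairs with an external data source created with `TYPE = BLOB_STORAGE`. A hedged two-step sketch (data source name, URL, and table are hypothetical; non-public containers also need a `CREDENTIAL`):

```sql
-- Hypothetical example: point BULK INSERT at Azure Blob Storage via DATA_SOURCE.
CREATE EXTERNAL DATA SOURCE MyAzureBlob
WITH (TYPE = BLOB_STORAGE, LOCATION = 'https://<storage-account>.blob.core.windows.net/data');

BULK INSERT dbo.Orders
FROM 'orders.csv'  -- path is relative to the data source LOCATION
WITH (DATA_SOURCE = 'MyAzureBlob', FORMAT = 'CSV', FIRSTROW = 2);
```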
````diff
@@ -254,27 +254,27 @@ FROM 'curated/covid-19/bing_covid-19_data/latest/bing_covid-19_data.csv'
 WITH (DATA_SOURCE = '<data-lake>', FIRSTROW = 2, LASTROW = 100, FIELDTERMINATOR = ',');
 ```

-#### MAXERRORS = *max_errors*
+#### MAXERRORS

 Specifies the maximum number of syntax errors allowed in the data before the bulk-import operation is canceled. Each row that can't be imported by the bulk-import operation is ignored and counted as one error. If *max_errors* isn't specified, the default is 10.

 The `MAXERRORS` option doesn't apply to constraint checks or to converting **money** and **bigint** data types.

-#### ERRORFILE = '*error_file_path*'
+#### ERRORFILE

 Specifies the file used to collect rows that have formatting errors and can't be converted to an OLE DB rowset. These rows are copied into this error file from the data file "as is."

 The error file is created when the command is executed. An error occurs if the file already exists. Additionally, a control file with the extension `.ERROR.txt` is created, which references each row in the error file and provides error diagnostics. As soon as the errors have been corrected, the data can be loaded.

 Beginning with [!INCLUDE [sssql17-md](../../includes/sssql17-md.md)], the *error_file_path* can be in Azure Blob Storage.

-#### ERRORFILE_DATA_SOURCE = '*errorfile_data_source_name*'
+#### ERRORFILE_DATA_SOURCE

 **Applies to:** [!INCLUDE [sssql17-md](../../includes/sssql17-md.md)] and later versions.

 Specifies a named external data source pointing to the Azure Blob Storage location of the error file to keep track of errors found during the import. The external data source must be created using the `TYPE = BLOB_STORAGE` option added in [!INCLUDE [sssql17-md](../../includes/sssql17-md.md)]. For more information, see [CREATE EXTERNAL DATA SOURCE](create-external-data-source-transact-sql.md).

-#### FIRSTROW = *first_row*
+#### FIRSTROW

 Specifies the number of the first row to load. The default is the first row in the specified data file. `FIRSTROW` is 1-based.
````

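Combining the two error options above, a hedged sketch (paths and table are hypothetical; the error file must not already exist):

```sql
-- Hypothetical example: allow up to 50 bad rows; rejected rows go to the error
-- file, and a companion control file (errors.ERROR.txt) records diagnostics.
BULK INSERT dbo.Sales
FROM 'D:\data\sales.csv'
WITH (FORMAT = 'CSV', FIRSTROW = 2, MAXERRORS = 50, ERRORFILE = 'D:\data\errors');
```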
````diff
@@ -289,18 +289,18 @@ WITH (FIRSTROW = 2);

 The `FIRSTROW` attribute isn't intended to skip column headers. The `BULK INSERT` statement doesn't support skipping headers. If you choose to skip rows, the [!INCLUDE [ssDEnoversion](../../includes/ssdenoversion-md.md)] looks only at the field terminators, and doesn't validate the data in the fields of skipped rows.

-#### LASTROW = *last_row*
+#### LASTROW

 Specifies the number of the last row to load. The default is 0, which indicates the last row in the specified data file.

-#### FORMATFILE_DATA_SOURCE = '*data_source_name*'
+#### FORMATFILE_DATA_SOURCE

 **Applies to:** [!INCLUDE [sssql17-md](../../includes/sssql17-md.md)] and later versions.

 Specifies a named external data source pointing to the Azure Blob Storage location of the format file to define the schema of imported data. The external data source must be created using the `TYPE = BLOB_STORAGE` option added in [!INCLUDE [sssql17-md](../../includes/sssql17-md.md)]. For more information, see [CREATE EXTERNAL DATA SOURCE](create-external-data-source-transact-sql.md).
 ::: moniker range="=azuresqldb-current || >=sql-server-2016 || >=sql-server-linux-2017 || =azuresqldb-mi-current"

-#### BATCHSIZE = *batch_size*
+#### BATCHSIZE

 Specifies the number of rows in a batch. Each batch is copied to the server as one transaction. If this fails, [!INCLUDE [ssNoVersion](../../includes/ssnoversion-md.md)] commits or rolls back the transaction for every batch. By default, all data in the specified data file is one batch. For information about performance considerations, see [Performance considerations](#performance-considerations) later in this article.
````

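As a sketch of the batching behavior described for `BATCHSIZE` (hypothetical table and file):

```sql
-- Hypothetical example: commit every 10,000 rows as its own transaction, so a
-- failure rolls back only the current batch rather than the whole load.
BULK INSERT dbo.Orders
FROM 'D:\data\orders.dat'
WITH (BATCHSIZE = 10000, FIELDTERMINATOR = '|');
```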
````diff
@@ -333,17 +333,17 @@ For more information about keeping identity values, see [Keep identity value

 Specifies that empty columns should retain a null value during the bulk-import operation, instead of having any default values for the columns inserted. For more information, see [Keep nulls or default values during bulk import](../../relational-databases/import-export/keep-nulls-or-use-default-values-during-bulk-import-sql-server.md).

-#### KILOBYTES_PER_BATCH = *kilobytes_per_batch*
+#### KILOBYTES_PER_BATCH

 Specifies the approximate number of kilobytes (KB) of data per batch as *kilobytes_per_batch*. By default, `KILOBYTES_PER_BATCH` is unknown. For information about performance considerations, see [Performance considerations](#performance-considerations) later in this article.

-#### ORDER ( { *column* [ ASC | DESC ] } [ ,... *n* ] )
+#### ORDER

 Specifies how the data in the data file is sorted. Bulk import performance is improved if the data being imported is sorted according to the clustered index on the table, if any. If the data file is sorted in an order other than the order of a clustered index key, or if there's no clustered index on the table, the `ORDER` clause is ignored. The column names supplied must be valid column names in the destination table. By default, the bulk insert operation assumes the data file is unordered. For optimized bulk import, [!INCLUDE [ssNoVersion](../../includes/ssnoversion-md.md)] also validates that the imported data is sorted.

 *n* is a placeholder that indicates that multiple columns can be specified.

-#### ROWS_PER_BATCH = *rows_per_batch*
+#### ROWS_PER_BATCH

 Indicates the approximate number of rows of data in the data file.
````

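A hedged sketch of the `ORDER` and `ROWS_PER_BATCH` hints (the `SaleID` clustered key, table, and file are hypothetical):

```sql
-- Hypothetical example: the file is pre-sorted on the clustered index key, so
-- declaring the order lets the engine skip an internal sort; ROWS_PER_BATCH
-- supplies a row-count estimate for the whole file.
BULK INSERT dbo.Sales
FROM 'D:\data\sales_sorted.dat'
WITH (ORDER (SaleID ASC), ROWS_PER_BATCH = 500000);
```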
````diff
@@ -359,7 +359,7 @@ For a columnstore index, the locking behavior is different because it's internal

 ### Input file format options

-#### FORMAT = 'CSV'
+#### FORMAT

 **Applies to:** [!INCLUDE [sssql17-md](../../includes/sssql17-md.md)] and later versions.
````

````diff
@@ -370,14 +370,17 @@ BULK INSERT Sales.Orders
 FROM '\\SystemX\DiskZ\Sales\data\orders.csv'
 WITH (FORMAT = 'CSV');
 ```
+::: moniker range="=fabric"
+In Fabric Data Warehouse, the `BULK INSERT` statement supports the same formats as the `COPY INTO` statement, so `FORMAT='PARQUET'` is also supported.
+::: moniker-end

-#### FIELDQUOTE = '*field_quote*'
+#### FIELDQUOTE

 **Applies to:** [!INCLUDE [sssql17-md](../../includes/sssql17-md.md)] and later versions.

 Specifies a character to use as the quote character in the CSV file. If not specified, the quote character (`"`) is used as the quote character, as defined in the [RFC 4180](https://tools.ietf.org/html/rfc4180) standard.

-#### FORMATFILE = '*format_file_path*'
+#### FORMATFILE

 Specifies the full path of a format file. A format file describes the data file that contains stored responses created by using the **bcp** utility on the same table or view. The format file should be used if:
````

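Following the new Fabric note, a hedged sketch of a Parquet load (table and URL are hypothetical; assumes the same shape `COPY INTO` accepts):

```sql
-- Hypothetical example: in Fabric Data Warehouse, FORMAT = 'PARQUET' is accepted
-- just as it is for COPY INTO.
BULK INSERT dbo.Trips
FROM 'https://<storage-account>.blob.core.windows.net/data/trips.parquet'
WITH (FORMAT = 'PARQUET');
```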
````diff
@@ -388,7 +391,7 @@ Specifies the full path of a format file. A format file describes the data file

 Beginning with [!INCLUDE [sssql17-md](../../includes/sssql17-md.md)], and in Azure SQL Database, `format_file_path` can be in Azure Blob Storage.

-#### FIELDTERMINATOR = '*field_terminator*'
+#### FIELDTERMINATOR

 Specifies the field terminator to be used for **char** and **widechar** data files. The default field terminator is `\t` (tab character). For more information, see [Specify field and row terminators](../../relational-databases/import-export/specify-field-and-row-terminators-sql-server.md).
````

````diff
@@ -401,7 +404,7 @@ WITH (FIELDTERMINATOR = ',', FIRSTROW = 2);
 > [!NOTE]
 > Replace `<data-lake>.blob.core.windows.net` with an appropriate URL.

-#### ROWTERMINATOR = '*row_terminator*'
+#### ROWTERMINATOR

 Specifies the row terminator to be used for **char** and **widechar** data files. The default row terminator is `\r\n` (newline character). For more information, see [Specify field and row terminators](../../relational-databases/import-export/specify-field-and-row-terminators-sql-server.md).
````
407410
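Putting the two terminator options together, a hedged sketch for a semicolon-delimited file with Unix line endings (table and path are hypothetical):

```sql
-- Hypothetical example: override both defaults; '0x0a' is the hex notation for
-- a bare line feed (LF) row terminator.
BULK INSERT dbo.Readings
FROM 'D:\data\readings.txt'
WITH (FIELDTERMINATOR = ';', ROWTERMINATOR = '0x0a');
```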
