Failure inserting into the read-only column

At times when transferring data from one database to another using either the Import/Export Wizard or SQL Server Integration Services, you may encounter the following error:

Failure inserting into the read-only column <column_name>. (SQL Server Import and Export Wizard)

This happens when the source table has an identity column that is auto-generated at the source and also at the destination. Since it is auto-generated by the system, Import/Export cannot overwrite identity data and treats it as a read-only column. The best way to resolve this issue is to enable IDENTITY_INSERT on the destination table before you import/export data.

--Execute this on the destination table BEFORE executing Import/Export.
 
SET IDENTITY_INSERT <TABLE_NAME> ON;

Once the data is imported/exported, you may disable IDENTITY_INSERT using the statement below:

--Execute this on the destination table AFTER executing Import/Export.
 
SET IDENTITY_INSERT <TABLE_NAME> OFF;
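
For reference, below is a minimal sketch of how IDENTITY_INSERT behaves; the Orders table and its values are purely illustrative:

--Create a table with an identity column (illustrative)
Create Table Orders
(
   OrderID Int Identity(1,1),
   OrderName Varchar(100)
)
Go
 
Set Identity_Insert Orders ON;
 
--With IDENTITY_INSERT ON, an explicit column list is required
--and you can supply your own identity values
Insert into Orders (OrderID, OrderName) Values (100, 'Sample Order');
 
Set Identity_Insert Orders OFF;
Go

Note that IDENTITY_INSERT can be ON for only one table per session at a time.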

Alternatively, you can also fix this by enabling identity insert from the Import/Export Wizard GUI as follows:

From the Import/Export Wizard, select the table and click on Edit Mappings.

You will now be presented with the Column Mappings dialog box, where you can check the Enable Identity Insert checkbox.

Click OK to close the window and re-run the process; you should now be able to copy identity data into the destination object.

Book: Nuts and Bolts of Database Mirroring

Download the free Nuts and Bolts of SQL Server Database Mirroring book by Saleem Hakani (Microsoft Corporation). It covers everything you need to know to configure, manage, and maintain database mirroring in your production environment. Bonus: it includes a disaster recovery template with steps that can be used to recover from a disaster.

Avoid Page Splits in SQL Server

Let’s say you have a glass filled with water and you try to pour more water into it: the water overflows. In exactly the same way, whenever a new row is added to a full index page, SQL Server moves around half of the rows to a new page to make room for the new row. This is known as a PAGE SPLIT. Page splits make room for new records but can be very resource intensive, and they can also cause fragmentation, which may adversely affect I/O operations.

Preventing Page-Splits: In order to avoid page splits, you must proactively determine an appropriate FILL FACTOR value. When an index is created or rebuilt, the fill factor value determines the percentage of space on each leaf-level page to be filled with data, reserving the remaining space for future growth.

For example: Configuring a fill factor value of 60 means that 40 percent of each leaf-level page will be left empty providing space for index expansion as data is added to the underlying table.

The default fill factor value is 0, which is suitable for the majority of situations. A fill factor of 0 means that the leaf-level page is filled almost to capacity, but some space is left for at least one additional index row. (Note: fill factor values of 0 and 100 behave the same.)

A fill factor value can be specified in the CREATE INDEX or ALTER INDEX statement to set it for individual indexes, or you can configure the value at the server level so that any new indexes you create use it as the default.

Configuring fill factor at the server level: the below example sets the server-wide fill factor to 70 percent, meaning 30% of each leaf-level page is reserved for future expansion. You must carefully test any fill-factor value in a test environment against your DML activity before implementing this option in the production environment.

Changing Server Level Fill-factor Value

Use Master
Go
 
Exec SP_Configure 'Show Advanced Options', 1
Reconfigure
Go
 
Exec SP_Configure 'Fill Factor', 70
Reconfigure
Go

The above server-level change requires a restart of SQL Server for the setting to take effect.

Configuring Fill-Factor at the Individual Index Level

You can also configure fill-factor during index creation for individual indexes.

--Create an Item Table
Create Table Item
(
   Col_A Varchar(100),
   Col_b Varchar(200)
)
Go
 
--Creating an Index on Item Table with Fill-Factor Value of 70 
CREATE UNIQUE INDEX SH_Index
  ON Item(Col_A)
  WITH (FillFactor = 70)
Go

Note: Configuring fill factor value of 70 means that 30% of each leaf-level page will be left empty providing space for index expansion as data is added to the underlying table in the future.

Let’s now query sys.indexes to check the FILL-FACTOR value for the ITEM table.

Select 
   Object_Name(Object_ID) as 'Object_Name',
   Name as Index_Name,
   Fill_Factor
from Sys.Indexes
   where Object_id = object_id('Item')
   and name is not null
Go
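
You can also change the fill factor of an existing index by rebuilding it, as mentioned earlier for ALTER INDEX. A minimal sketch using the SH_Index created above (the new value of 80 is illustrative):

--Rebuild the existing index with a new fill-factor value
Alter Index SH_Index ON Item
  REBUILD WITH (FILLFACTOR = 80)
Go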

Things to keep in mind:

  1. SQL Server will need to be restarted when changing Fill-Factor value at the server level.
  2. Make sure to perform thorough testing before making any fill-factor changes to indexes in a production environment.

Tracking DML Changes without SQL Server Triggers

SQL Server supports an OUTPUT clause as part of DML statements that can help you track changes made during any DML operation. The OUTPUT clause can save the result set into a table or table variable.

The functionality is similar to the INSERTED and DELETED tables available in triggers, which provide access to the rows that were modified during a DML operation.

Let’s take an example of changing the address by reversing its original value:

Let’s create an Address table using the below script:

Create Table Address 
(
     ProductID Int, 
     SupplierID Int, 
     Address Varchar(255)
)
Go

Let’s add some data to the Address table using the below script:

Insert into Address Values 
(
   234,
   567,
   '1234 One SQL Way, Microsoft City, U.S.'
)
Go
Insert into Address Values 
(
   345,
   678,
   '1234 One Windows Way, Microsoft City, WA'
)
Go

Let’s now update the address: (Please observe the use of OUTPUT clause)

Declare @RecordChanges table 
(
   Change Varchar(255)
)
 
Update Address
   Set Address = Reverse(Address)
 
--Record the updates into the table variable 
OUTPUT 'Original Value: ' + DELETED.Address + ' has been changed to: ' + INSERTED.Address into @RecordChanges

Once the records are updated, you can check the values from both before and after the update:

Select * from @RecordChanges
 
--Output
Original Value: 1234 One SQL Way, Microsoft City, U.S. has been changed to: .S.U ,ytiC tfosorciM ,yaW LQS enO 4321
Original Value: 1234 One Windows Way, Microsoft City, WA has been changed to: AW ,ytiC tfosorciM ,yaW swodniW enO 4321

Note: You may also use the above logic to track any changes you make to the data and store them in a table.
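
The OUTPUT clause works with INSERT and DELETE statements as well. Below is a minimal sketch, reusing the Address table from above, that captures deleted rows into a table variable (the variable name is illustrative):

Declare @DeletedRows table 
(
   ProductID Int, 
   Address Varchar(255)
)
 
--Record the deleted rows into the table variable
Delete from Address
OUTPUT DELETED.ProductID, DELETED.Address into @DeletedRows
   where ProductID = 234
 
Select * from @DeletedRows
Go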

Faster SQL Development using T-SQL Snippets

T-SQL snippets allow you to quickly build T-SQL statements without having to remember the commands or their syntax. This reduces development time and increases productivity for developers and DBAs. Snippet templates were introduced in SQL Server 2012 and are based on XML with predefined fields and values. When you use a T-SQL snippet, the fields are highlighted and you can tab through each field and change the values as required.

Imagine a series of commands that you always use when creating a Trigger or a Table or a Stored Procedure or even a SELECT statement. You can now reduce the amount of time and code you would write by implementing T-SQL Snippets.

Snippets are categorized for ease of use so that the user can browse and use the various snippets based on the below categories:

Default Snippet: Code templates for various T-SQL commands that you can quickly insert into your T-SQL code when creating tables, stored procedures, triggers, etc.

Surround Snippet: Code templates that allow you to implement code constructs like BEGIN END, IF, WHILE, etc.

Custom Snippet: Custom snippets allow you to create your own snippet that appears in the snippet menu when invoked.

Invoking a Snippet:

  1. Right-click in the query editor and select Insert Snippet from the context menu, or
  2. Place the cursor where you would like to insert the snippet and press CTRL+K, CTRL+X.

Creating a Custom Snippet is a 3 step process:

  1. Create a snippet using XML
  2. Register the snippet in SSMS
  3. Invoke the snippet when using Query editor

Note: By default, all T-SQL Snippets are stored under the following folder and are saved as .snippet files:

C:\Program Files\Microsoft SQL Server\<version>\Tools\Binn\ManagementStudio\SQL\Snippets\1033

Let’s now look at creating a Snippet using XML:

In the below example, we will create a snippet that writes a SELECT statement for any table and uses a CASE expression for an equality check on a column. The title, literal names, and default values in the listing are illustrative; adjust them to suit your needs.

<?xml version="1.0" encoding="utf-8"?>
<CodeSnippets xmlns="http://schemas.microsoft.com/VisualStudio/2005/CodeSnippet">
  <CodeSnippet Format="1.0.0">
    <Header>
      <Title>Select with CASE</Title>
      <Description>Inserts a SELECT statement with a CASE expression for an equality check on a column.</Description>
      <Author>Saleem Hakani</Author>
      <SnippetTypes>
        <SnippetType>Expansion</SnippetType>
      </SnippetTypes>
    </Header>
    <Snippet>
      <Declarations>
        <Literal>
          <ID>ColumnName</ID>
          <ToolTip>Column to evaluate in the CASE expression</ToolTip>
          <Default>Column_Name</Default>
        </Literal>
        <Literal>
          <ID>TableName</ID>
          <ToolTip>Table to select from</ToolTip>
          <Default>Table_Name</Default>
        </Literal>
      </Declarations>
      <Code Language="SQL">
        <![CDATA[
Select
    Case $ColumnName$
        When '' Then ''
        When '' Then ''
        When '' Then ''
        When '' Then ''
        Else 'Value not found'
    End,
    $ColumnName$
From $TableName$
Go
]]>
      </Code>
    </Snippet>
  </CodeSnippet>
</CodeSnippets>

Note: You can also right-click in the query editor and select Insert Snippet from the context menu.

Best Practices for Backing up Large Databases

In an ideal world, hard drives and other hardware never fail, software is never defective, users do not make mistakes, and hackers are never successful. However, we live in a less than perfect world and we should plan and prepare to handle adverse events.

In today’s topic, I will focus on best practices for backing up large, mission-critical databases. Performing and maintaining good backups is one of the top priorities for any DBA, developer, or engineer working with databases.

RULE OF THUMB! BACKUP AND RESTORE IS NOT A HIGH AVAILABILITY FEATURE. YOU MUST PERFORM REGULAR BACKUPS OF YOUR DATABASES.

RESTORING a database from a backup is simply a repair feature and not an availability feature. If you are running a mission-critical system and your database requires high availability, then please look into the various H/A features available with SQL Server.

If you are running a large, mission-critical database system, then you need your database to be available continuously or for extended periods of time with minimal downtime for maintenance tasks. Therefore, the duration of situations that require databases to be restored must be kept as short as possible.

Additionally, if your databases are large and require longer periods of time to back up and restore, then you MUST look into some of the features SQL Server offers to increase the speed of backup and restore operations and minimize the effect on users during both operations.

Here are some features that can help you improve database backups:

Use Multiple Backup Devices Simultaneously

If you are performing a backup or restore of a large database, then use multiple backup devices simultaneously so the backup is written to all the devices at the same time. Using multiple backup devices in SQL Server allows database backups to be written to all devices in parallel. One of the potential bottlenecks in backup throughput is the backup device speed; using multiple backup devices can increase throughput in proportion to the number of devices used. Similarly, the backup can be restored from multiple devices in parallel.
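
Below is a minimal sketch that stripes a backup across three files on separate drives; the database name and paths are illustrative:

BACKUP DATABASE AdventureWorks
TO DISK = 'D:\Backups\AW_Stripe1.bak',
   DISK = 'E:\Backups\AW_Stripe2.bak',
   DISK = 'F:\Backups\AW_Stripe3.bak'
WITH INIT, STATS = 10
Go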

Use Mirrored Media Set

Use a mirrored media set. A total of four mirrors is possible per media set. With a mirrored media set, the backup operation writes to multiple groups of backup devices; each group of backup devices makes up a single mirror in the mirrored media set. Each mirror must use the same quantity and type of physical backup devices, and all must have the same properties.
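
Below is a minimal sketch of a mirrored backup; note that mirrored backup media sets require Enterprise edition, and the database name and paths are illustrative. MIRROR TO requires WITH FORMAT to create a new media set:

BACKUP DATABASE AdventureWorks
TO DISK = 'D:\Backups\AW_Primary.bak'
MIRROR TO DISK = 'E:\Backups\AW_Mirror.bak'
WITH FORMAT, STATS = 10
Go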

Use Snapshot Backups (Fastest Backup)

This is the fastest way to perform backups of databases. A snapshot backup is a specialized backup that is created almost instantaneously by using a split-mirror solution obtained from an independent hardware or software vendor. Snapshot backups minimize or eliminate the use of SQL Server resources to accomplish the backup. This is especially useful for moderate to very large databases in which availability is very important. Snapshot backups and restores can sometimes be performed in seconds, with very little or zero effect on the server.

Use Low Priority Backup Compression

Backing up databases using the backup compression feature can increase CPU usage, and any additional CPU consumed by the compression process can adversely impact concurrent operations. Therefore, when possible, create a low-priority compressed backup in a session whose CPU usage is limited by Resource Governor to prevent CPU contention.
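
Below is a sketch of that approach; the pool, workload group, login, and path names are all illustrative, and the classifier function must be created in the master database:

Use Master
Go
 
--Create a resource pool that caps CPU for backup sessions
CREATE RESOURCE POOL BackupPool WITH (MAX_CPU_PERCENT = 20);
CREATE WORKLOAD GROUP BackupGroup USING BackupPool;
GO
 
--Classifier function routes the dedicated backup login to the group
CREATE FUNCTION dbo.fn_BackupClassifier() RETURNS SYSNAME WITH SCHEMABINDING
AS
BEGIN
   IF SUSER_SNAME() = 'BackupUser' RETURN 'BackupGroup';
   RETURN 'default';
END
GO
 
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fn_BackupClassifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO
 
--From a session logged in as BackupUser, run the compressed backup
BACKUP DATABASE AdventureWorks
TO DISK = 'D:\Backups\AW_Compressed.bak'
WITH COMPRESSION, STATS = 10
Go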

Use Full, Differential & Log Backups

If the database recovery model is set to FULL, then use a combination of backup types (FULL, DIFFERENTIAL, and LOG). This will help you minimize the number of backups that need to be applied to bring the database to the point of failure.
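
Below is a minimal sketch of such a combination; the database name, schedule, and paths are illustrative:

--Weekly full backup
BACKUP DATABASE AdventureWorks TO DISK = 'D:\Backups\AW_Full.bak' WITH INIT;
 
--Nightly differential backup (changes since the last full backup)
BACKUP DATABASE AdventureWorks TO DISK = 'D:\Backups\AW_Diff.bak' WITH DIFFERENTIAL;
 
--Frequent transaction log backups (for example, every 15 minutes)
BACKUP LOG AdventureWorks TO DISK = 'D:\Backups\AW_Log.trn';
Go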

Use File / Filegroup Backups

Use file and filegroup backups together with T-log backups. These allow only those files that contain the relevant data, instead of the whole database, to be backed up or restored.
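
Below is a minimal sketch of a filegroup backup; the database name, filegroup name, and path are illustrative:

--Back up only the filegroup that holds the relevant data
BACKUP DATABASE AdventureWorks
FILEGROUP = 'SalesData'
TO DISK = 'D:\Backups\AW_SalesFG.bak'
Go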

Use a Different Disk for Backups

Do not use the same physical disk that holds the database files or log files for backups. Using the same physical disk not only affects performance, but may also reduce the recoverability of your plan.