
Monday, 5 October 2015

The World’s Leaking Data

Right now someone is working extremely hard trying to steal customer data.
As a Database Administrator it is YOU who is responsible for safeguarding the data in your database environments.
Make no mistake, it is your Primary Responsibility as a DBA. Everything else, absolutely everything that you do in your role, is secondary to this need.
Within our community I’m becoming increasingly concerned that security has fallen from the forefront of people’s minds. Data Professionals just don’t seem to be giving security the attention that it deserves.
I’m not the only one who has been observing this worrying trend of late, with K. Brian Kelly blogging about the disturbing lack of security-themed sessions announced for the upcoming SQL PASS Summit.
The threat to you, your company and your customers is very real. Just take a look at the Infographic below showing data breaches over the last 3 years:


Why Security Must Have Your Attention

As Database Administrators, clearly there is a very real threat to the customer data in our care. There are some rather ambitious people out there who would like very much to get their hands on our SQL Server databases and their data.
Please ensure that you are taking adequate steps and measures to protect yourself from such threats, by giving security the attention and respect it deserves.

Plugging the Leak

I hope that this post encourages you to stop and think about the importance of SQL Server security in your environment.
As a community it is our responsibility to ensure security is a subject that remains at the forefront of our minds.

The Secret to Keeping Your Cool Like an Experienced Production DBA

Your actions as a Database Administrator can make the difference between a disaster being a minor nuisance or a major problem.
Things can and will go wrong in your environments. Even the greatest software contains bugs and hardware fails, often.
When an unforeseen issue arises your reaction can directly influence the end result and not necessarily in a positive way.
You must be able to keep your cool under pressure in order to achieve the most desirable outcome.

You’re Playing at the High Stakes Table

Panicking in response to a crisis will delay the time to resolution and increase the cost of the outage to the business. Mistakes are more likely to occur if a knee jerk reaction is taken. In extreme cases, you could lose your job. Your company could go out of business. It happens.
As Data Professionals it is we who are responsible for the data assets within our organisations, with those of you working alone perhaps shouldering the greatest responsibility of us all.
I’m fortunate to have first-hand experience troubleshooting critical production issues and coordinating incident response for some of the largest SQL Server environments in the world. I’m going to share with you what I have learned about keeping your cool in a crisis and what I believe is the most vital component to effective incident response.

Game Time

Any serious professional playing in a high stakes game goes into it with a plan to win. Winging it is out of the question because things are just too important to leave to chance.
It’s no different for a Database Administrator, which is why I believe that every DBA who is serious about playing to win must have an Incident Response Plan.
When problems arise you don’t want to be spending time figuring out what your next action is. You should already know what your first steps are, so that you can begin to implement them right away, at the first sign of trouble.

How Will You Respond to Crisis?

When you’ve got an incident to respond to you’re going to want to at least have some sort of plan to follow. Any semblance of a documented plan is better than having no plan at all.
  • What are you going to do when the call comes in?
  • What is the first thing that you are going to look at on your SQL Server instance?
  • Are there any specific queries that you might want to run?
  • Do you have them stored in a readily accessible place?
As you begin producing your own incident response plans and processes, you’ll likely want to start building a knowledge base of supporting resources, tools and documentation (maybe using a wiki): a place where you and your team can get at the information you might need during an incident in order to respond fast and effectively.
Practice mock incident and disaster scenarios in order to put your plans to the test. This will not only give you confidence in the effectiveness of your plan, but will also likely help you identify opportunities for further improvement.
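To make the “specific queries” idea concrete, here is a minimal sketch of the sort of first-look query an incident response plan might keep readily to hand. It is illustrative only; the session filter and the ordering are assumptions you should adapt to your own environment and plan.

```sql
-- First look: what is running on the instance right now, what is it
-- waiting on, and is anything blocked? (Illustrative sketch only.)
SELECT
    r.session_id,
    r.status,
    r.command,
    r.wait_type,
    r.wait_time,
    r.blocking_session_id,
    DB_NAME(r.database_id) AS database_name,
    t.text               AS running_batch
FROM sys.dm_exec_requests AS r
OUTER APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id > 50            -- skip most system sessions
ORDER BY r.wait_time DESC;         -- longest waiters first
```

Having even one query like this pinned in your team wiki means the first minute of an incident is spent gathering facts, not composing SQL under pressure.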

Planning for Success

Having an incident response plan produced and ready will:
  • Provide a proven structure for you to work from.
  • Enable you to respond swiftly and with precision.
  • Give you the confidence that you’re ready and able to handle whatever challenges might present themselves.
When it’s all kicking off and your boss is hovering around your desk, having a response plan will ensure you keep your cool, remain focused on your task, and deliver the best possible outcome.

PowerShell – Change SQL Server Login Password

Here’s a quick post detailing a PowerShell script that can be used to change the password for a SQL Server Login.
Regular readers know that I practice the philosophy of Automate Everything and DBA administrative tasks are no exception. I don’t want to be doing the same task by hand twice, if I can help it.
A requirement came up recently where I had the need to change a number of SQL Server Logins on multiple SQL Server instances. That’s just the sort of exciting work that PowerShell can be used for.
Below you’ll find a script that I threw together in order to complete the task. It accepts a list of server/instance names as a text file, and you then enter the SQL Server Login name and password as parameters on the command line. You don’t want to be storing those in a text file, right?
# Date:         11/01/14
# Author:       John Sansom
# Description:  PS script to change a SQL Login password for a provided server list.
#           The script accepts an input file of server names.
# Version:  1.0
#
# Example Execution: .\Change_SQLLoginPassword.ps1 .\ServerNameList.txt SQLLogin Password
 
param([String]$serverListPath, [String]$login, [String]$password)
 
#Load the input file into an Object array
$ServerNameList = get-content -path $serverListPath
 
#Load the SQL Server SMO Assemly
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo") | Out-Null
 
#Create a new SqlConnection object
$objSQLConnection = New-Object System.Data.SqlClient.SqlConnection
 
#For each server in the array do the following..
foreach($ServerName in $ServerNameList)
{
    Try
    {
        $objSQLConnection.ConnectionString = "Server=$ServerName;Integrated Security=SSPI;"
        Write-Host "Trying to connect to SQL Server instance on $ServerName..." -NoNewline
        $objSQLConnection.Open() | Out-Null
        Write-Host "Success."
        $objSQLConnection.Close()
    }
    Catch
    {
        Write-Host -BackgroundColor Red -ForegroundColor White "Fail"
        $errText = $Error[0].ToString()
        if ($errText.Contains("network-related"))
        {Write-Host "Connection Error. Check server name, port, firewall."}
 
        Write-Host $errText
        continue
    }
 
    #Create a new SMO instance for this $ServerName
    $srv = New-Object "Microsoft.SqlServer.Management.Smo.Server" $ServerName
 
    #Find the SQL Server Login and change the password
    $SQLUser = $srv.Logins | ? {$_.Name -eq "$login"};
    if ($SQLUser -eq $null)
    {
        Write-Host "Login:'$login' not found on server:'$ServerName'"
        continue
    }
    $SQLUser.ChangePassword($password);
    $SQLUser.PasswordPolicyEnforced = $true;
    $SQLUser.Alter();
    $SQLUser.Refresh();
    Write-Host "Password for Login:'$login' changed successfully on server:'$ServerName'"
}

Using sys.dm_os_ring_buffers to Troubleshoot Connectivity Issues

I encountered an interesting question over on the MSDN forums, from a poster reporting an issue whereby the “sa” account kept being locked out.
In scenarios such as this the SQL Server Error Log is your friend, as it can be configured to record failed login attempts for an instance. Of course pursuing that avenue of exploration would make for a rather dull blog post and so instead we’re going to take a look at using the lesser known DMV sys.dm_os_ring_buffers.
Note that you use this particular DMV at your own peril, considering Books Online states that:
“The following SQL Server Operating System–related dynamic management views are identified for informational purposes only. Not supported. Future compatibility is not guaranteed.”
In other words, don’t go making this DMV an integral part of your monitoring solution; rather, be mindful of its existence as a possible tool for ad-hoc use.
Sounds like just the sort of thing we should look into. The DMV sys.dm_os_ring_buffers maintains approximately 1000 records, before wrapping around and replacing the oldest entries first. It exposes four columns but we’re primarily only interested in one of them, record, which contains XML data. Yippee! We’ll also filter the results to include just the connectivity ring buffer data.
SELECT CAST(record as xml) AS record_data
FROM sys.dm_os_ring_buffers
WHERE ring_buffer_type= 'RING_BUFFER_CONNECTIVITY'
Executing the statement above will produce output similar to the following:
[Screenshot: results grid of RING_BUFFER_CONNECTIVITY XML fragment hyperlinks]
Clicking an XML fragment hyperlink will open the contents in a new SSMS tab, in a more readable format similar to the example below, showing a basic ring buffer connectivity error record.
<Record id="5577586" type="RING_BUFFER_CONNECTIVITY" time="9300997329">
  <ConnectivityTraceRecord>
    <RecordType>Error</RecordType>
    <RecordSource>Tds</RecordSource>
    <Spid>80</Spid>
    <SniConnectionId>4BBB38FE-B46A-4D6E-917F-5D473464448B</SniConnectionId>
    <OSError>0</OSError>
    <SniConsumerError>18456</SniConsumerError>
    <SniProvider>7</SniProvider>
    <State>8</State>
    <RemoteHost>10.323.12.32</RemoteHost>
    <RemotePort>1234</RemotePort>
    <LocalHost>10.132.321.32</LocalHost>
    <LocalPort>1888</LocalPort>
    <RecordTime>2/18/2014 14:2:58.588</RecordTime>
    <TdsBuffersInformation>
      <TdsInputBufferError>0</TdsInputBufferError>
      <TdsOutputBufferError>0</TdsOutputBufferError>
      <TdsInputBufferBytes>320</TdsInputBufferBytes>
    </TdsBuffersInformation>
    <TdsDisconnectFlags>
      <PhysicalConnectionIsKilled>0</PhysicalConnectionIsKilled>
      <DisconnectDueToReadError>0</DisconnectDueToReadError>
      <NetworkErrorFoundInInputStream>0</NetworkErrorFoundInInputStream>
      <ErrorFoundBeforeLogin>0</ErrorFoundBeforeLogin>
      <SessionIsKilled>0</SessionIsKilled>
      <NormalDisconnect>0</NormalDisconnect>
    </TdsDisconnectFlags>
  </ConnectivityTraceRecord>
  <Stack>
    <frame id="0">0X0000000001D6C34B</frame>
    <frame id="1">0X0000000001D68FDD</frame>
    <frame id="2">0X0000000002A81001</frame>
    <frame id="3">0X0000000001267E98</frame>
    <frame id="4">0X0000000000E215AD</frame>
    <frame id="5">0X0000000000E21492</frame>
    <frame id="6">0X00000000009EBBD8</frame>
    <frame id="7">0X00000000009EB8BA</frame>
    <frame id="8">0X00000000009EB6FF</frame>
    <frame id="9">0X0000000000F08FB6</frame>
    <frame id="10">0X0000000000F09175</frame>
    <frame id="11">0X0000000000F09839</frame>
    <frame id="12">0X0000000000F09502</frame>
    <frame id="13">0X00000000757C37D7</frame>
    <frame id="14">0X00000000757C3894</frame>
    <frame id="15">0X000000007796652D</frame>
  </Stack>
</Record>
As you can see there’s some rather useful information contained within the XML document. Things such as SniConsumerError, State and the RemoteHost responsible for the connection.
The RecordType node is particularly relevant for the troubleshooting case our forum friend had, with a node value of “Error” naturally indicating a connection error. By identifying a connectivity ring buffer record of this type, we can then look up the SniConsumerError code to establish precisely what error was encountered.
Other RecordTypes include LoginTimers (handy for troubleshooting connection timeouts) and ConnectionClose (which can be used to identify killed Spids).
Great, you say, but a busy DBA does not want to have to click through hordes of XML documents in order to find possible information of interest. Let’s get shredding then:
;WITH RingBufferConnectivity as
(   SELECT
        records.record.value('(/Record/@id)[1]', 'int') AS [RecordID],
        records.record.value('(/Record/ConnectivityTraceRecord/RecordType)[1]', 'varchar(max)') AS [RecordType],
        records.record.value('(/Record/ConnectivityTraceRecord/RecordTime)[1]', 'datetime') AS [RecordTime],
        records.record.value('(/Record/ConnectivityTraceRecord/SniConsumerError)[1]', 'int') AS [Error],
        records.record.value('(/Record/ConnectivityTraceRecord/State)[1]', 'int') AS [State],
        records.record.value('(/Record/ConnectivityTraceRecord/Spid)[1]', 'int') AS [Spid],
        records.record.value('(/Record/ConnectivityTraceRecord/RemoteHost)[1]', 'varchar(max)') AS [RemoteHost],
        records.record.value('(/Record/ConnectivityTraceRecord/RemotePort)[1]', 'varchar(max)') AS [RemotePort],
        records.record.value('(/Record/ConnectivityTraceRecord/LocalHost)[1]', 'varchar(max)') AS [LocalHost]
    FROM
    (   SELECT CAST(record as xml) AS record_data
        FROM sys.dm_os_ring_buffers
        WHERE ring_buffer_type= 'RING_BUFFER_CONNECTIVITY'
    ) TabA
    CROSS APPLY record_data.nodes('//Record') AS records (record)
)
SELECT RBC.*, m.text
FROM RingBufferConnectivity RBC
LEFT JOIN sys.messages M ON
    RBC.Error = M.message_id AND M.language_id = 1033
WHERE RBC.RecordType='Error' --Comment Out to see all RecordTypes
ORDER BY RBC.RecordTime DESC
Executing the query above produces a more readable result set such as the one below. You can see that in the query we joined our ring buffer data to the sys.messages catalog view in order to grab the text for the Error id. Splendid. Using this information we can track down the precise source responsible for the Error: Login failed.
[Screenshot: query results]
(Note: Sensitive information has been omitted from results above, RemoteHost etc.)
Take a look at the DMV sys.dm_os_ring_buffers and keep it in mind as a potential troubleshooting aid for the next time you have a connectivity issue to diagnose.

How Does Disabling an Index Affect the Index Usage DMV Counters

A question was posted to #SQLHelp on Twitter asking whether disabling an index would clear the index usage counters stored in the SQL Server Dynamic Management Views (DMVs).
Great question! Not one that is straightforward to answer via the Twitter medium though, so I wanted to share some more detail here.
Consider that the action of disabling a SQL Server Index will not clear the usage counters in sys.dm_db_index_usage_stats. SQL Server will preserve the collected data for a disabled index until such time that:
  • the index is either dropped or re-created (CREATE … WITH DROP_EXISTING), thereby removing the index entry from the DMV.
  • the index is rebuilt, effectively resuming the collection of DMV metrics.
Our Twitter chum will be left with an index they cannot use once it is disabled, so we can assume that they will most likely either decide to DROP it or make use of it once again (REBUILD).
The short answer shared to Twitter then was “No – the DMV metrics will not be cleared”, in the knowledge that the poster would be rendering their existing index unusable and be required to perform a subsequent action that would ultimately influence the answer I had provided, one way or the other.

Keeping it Simple on Twitter

I’m of the opinion that answering SQL Server questions on Twitter requires being conservative with the truth on occasion. It’s our responsibility to share knowledge and advice with care, endeavoring to serve the best interests of the community.
It’s the classic design conundrum of “just because you can” does not mean that “you should”.
An example that illustrates this greater responsibility is a recent question on Twitter that asked whether nested Linked Server calls are possible (ServerA.proc -> ServerB.proc -> ServerC.proc).
Sure this can be achieved using nested stored procedure calls with linked server references but should you do it? Probably not.
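For illustration, such a nested call chain can be sketched as below. Every server, database and procedure name here is a hypothetical placeholder, and concerns such as distributed transactions and MSDTC configuration are deliberately left aside:

```sql
-- On ServerA: a local procedure calls a procedure on linked server ServerB
-- via a four-part name. ServerB.MiddleDb.dbo.usp_Middle would itself contain
-- a further nested call:
--     EXEC ServerC.TargetDb.dbo.usp_Final;
-- All names are hypothetical placeholders for illustration only.
CREATE PROCEDURE dbo.usp_CallChain
AS
BEGIN
    EXEC ServerB.MiddleDb.dbo.usp_Middle;
END;
GO
```

It works, but each hop adds latency, failure modes and transactional complexity, which is exactly why “can” and “should” diverge here.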
Strictly speaking I was answering on Twitter with a bit of a fib, albeit with good intentions, and so I wanted to post an accurate proof of concept here for completeness.
We’ll begin our test by creating a simple table with two indexes, one Clustered(implicit on the Primary Key) and one Non-Clustered (explicitly created), and insert a single record.
USE tempdb;
GO
 
-- Create a table for the test case.
IF OBJECT_ID(N'dbo.IndexUsageTesting', N'U') IS NOT NULL
 DROP TABLE IndexUsageTesting;
 
CREATE TABLE IndexUsageTesting
(
 ID INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
 SomeData VARCHAR(20) NOT NULL
)
GO
 
--Create a Non-Clustered Index on the table.
CREATE NONCLUSTERED INDEX ncl_SomeData ON IndexUsageTesting(SomeData);
GO
 
--Insert a record into the table
INSERT INTO IndexUsageTesting(SomeData) VALUES ('blah');
GO
Now let’s review the contents of the DMV sys.dm_db_index_usage_stats to see what data is recorded for the Non-Clustered Index ncl_SomeData.
--Review the index usage metrics for the Non-Clustered index.
Select
    OBJECT_NAME(A.object_id) as TableName,
    A.index_id,
    B.name as IndexName,
    A.user_scans,
    A.user_seeks,
    A.user_updates,
    A.user_lookups
from sys.dm_db_index_usage_stats A
    inner join sys.indexes B on
        A.index_id = B.index_id and A.object_id = B.object_id
where A.database_id = DB_ID(N'tempdb')
and A.object_id = object_id(N'dbo.IndexUsageTesting')
and B.name = 'ncl_SomeData'
ORDER BY A.index_id;
GO
Executing the code above returns the output below, which shows the index ncl_SomeData has been updated once, as expected (the result of the INSERT statement).
[Screenshot: result set 1]
Just for kicks, let’s now UPDATE the table record to see how it affects the DMV data for the index.
--Update SomeData
UPDATE IndexUsageTesting SET SomeData = 'blah blah blah' where SomeData = (SELECT SomeData FROM IndexUsageTesting)
GO
 
--Review how this affects the metrics
Select
    OBJECT_NAME(A.object_id) as TableName,
    A.index_id,
    B.name as IndexName,
    A.user_scans,
    A.user_seeks,
    A.user_updates,
    A.user_lookups
from sys.dm_db_index_usage_stats A
    inner join sys.indexes B on
        A.index_id = B.index_id and A.object_id = B.object_id
where A.database_id = DB_ID(N'tempdb')
and A.object_id = object_id(N'dbo.IndexUsageTesting')
and B.name = 'ncl_SomeData'
ORDER BY A.index_id;
GO
As you can see, executing the UPDATE statement modified the DMV entry to now include a scan, a seek and a second update entry.
[Screenshot: result set 2]
Let’s now go ahead and DISABLE the Non-Clustered index and execute the very same UPDATE statement as before.
--Disable the Non-Clustered index
ALTER INDEX ncl_SomeData ON IndexUsageTesting DISABLE;
 
--Update SomeData
UPDATE IndexUsageTesting SET SomeData = 'blah blah blah' where SomeData = (SELECT SomeData FROM IndexUsageTesting);
 
--Review the DMV
Select
    OBJECT_NAME(A.object_id) as TableName,
    A.index_id,
    B.name as IndexName,
    A.user_scans,
    A.user_seeks,
    A.user_updates,
    A.user_lookups
from sys.dm_db_index_usage_stats A
    inner join sys.indexes B on
        A.index_id = B.index_id and A.object_id = B.object_id
where A.database_id = DB_ID(N'tempdb')
and A.object_id = object_id(N'dbo.IndexUsageTesting')
--and B.name = 'ncl_SomeData'
ORDER BY A.index_id;
GO
Inspecting the result set below, we can see that with the Non-Clustered Index in a disabled state the update operation did not change its DMV entry, but the metrics for the index do persist.
Note: I’ve included the DMV data for the Clustered Index in the result set below for completeness. It highlights the fact that the update operation incremented the Clustered Index’s DMV record; that row was omitted from the previous result sets for clarity.
[Screenshot: result set 3]
Now let’s proceed to REBUILD the non-clustered index and execute an UPDATE operation.
--Rebuild the non-clustered index
ALTER INDEX ncl_SomeData ON IndexUsageTesting REBUILD;
GO
 
--Update SomeData
UPDATE IndexUsageTesting SET SomeData = 'blah blah blah' where SomeData = (SELECT SomeData FROM IndexUsageTesting);
GO
 
--Review the DMV
Select
    OBJECT_NAME(A.object_id) as TableName,
    A.index_id,
    B.name as IndexName,
    A.user_scans,
    A.user_seeks,
    A.user_updates,
    A.user_lookups
from sys.dm_db_index_usage_stats A
    inner join sys.indexes B on
        A.index_id = B.index_id and A.object_id = B.object_id
where A.database_id = DB_ID(N'tempdb')
and A.object_id = object_id(N'dbo.IndexUsageTesting')
--and B.name = 'ncl_SomeData'
ORDER BY A.index_id;
GO
As you can see, metric collection for the non-clustered index has now resumed, with each value incremented by one as expected.
[Screenshot: result set 4]

Closing Thoughts & Your Homework

When you think about it, the persistence of these metrics across REBUILD operations is desirable for practices such as Index Maintenance. I must admit, however, that I had initially expected a DISABLED index that is subsequently re-enabled via a REBUILD operation to result in the DMV metrics being cleared. This walk-through proves otherwise, confirming that disabling a SQL Server index does not clear the index usage DMV counters. Depending on what subsequent action is taken on the index, however, the data may yet be cleared.
Extra-Credit
What happens if you re-create the ncl index using CREATE…WITH DROP_EXISTING ?
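If you’d like to try the extra-credit question yourself, one way to re-create the index in place is sketched below, using the same table and index as the walk-through. Re-run the earlier sys.dm_db_index_usage_stats query afterwards to see what happened to the counters.

```sql
--Re-create the non-clustered index in place (extra-credit experiment)
CREATE NONCLUSTERED INDEX ncl_SomeData ON IndexUsageTesting(SomeData)
    WITH (DROP_EXISTING = ON);
GO
```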
You can download the full source code for this post here: IndexUsageStatsDMVTest.txt
Update 25/06/14: Tests were performed on SQL Server 2008 R2 SP1. Observed behavior is different in SQL Server 2014.

Source  : http://www.johnsansom.com/
