Tuesday, December 25, 2012

A good repository of lesser-known SQL Server books

http://www.sqlservercentral.com/Books

Here is a link to SQL Server books (some of them free). These books are not widely known and not easy to find. They are designed to cover specific areas of SQL Server functionality.

Friday, September 7, 2012

Isolation level for session

The view sys.dm_exec_sessions can be used if you want to see the transaction isolation level set for a session.
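A quick way to decode it: the column transaction_isolation_level in sys.dm_exec_sessions holds a numeric code, which can be translated with a CASE expression like this sketch (the codes 0-5 are the documented values for this column):

```sql
-- Show the isolation level of every user session.
SELECT session_id,
       transaction_isolation_level,
       CASE transaction_isolation_level
            WHEN 0 THEN 'Unspecified'
            WHEN 1 THEN 'ReadUncommitted'
            WHEN 2 THEN 'ReadCommitted'
            WHEN 3 THEN 'RepeatableRead'
            WHEN 4 THEN 'Serializable'
            WHEN 5 THEN 'Snapshot'
       END AS isolation_level_desc
FROM sys.dm_exec_sessions
WHERE is_user_process = 1;
```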

Thursday, August 16, 2012

Thursday, August 9, 2012

Peak time in network and price of energy

According to this IEEE Smart Grid article, peak time in Sacramento is between 4 and 7 PM in summer, probably for residential consumers. Knowing customer load patterns is a very useful thing.

Also interesting is that supplied energy is ten times more costly at peak time than at off-peak time. And this is not without reason: the whole power system - from generation to distribution - suffers big problems as a result of the peak load. So reducing peak load, through several tactics, is the ultimate goal for overloaded networks. This is why utilities charge ten times more for energy used at peak time. They are telling their customers: please, reduce your load :-)

Monday, August 6, 2012

SME

This is one very important abbreviation, and a role hides under these three letters. It means Subject Matter Expert. In software development, this is a person whose role is very important, especially in huge companies with many horizontal layers of software. An SME should know some specific functionality: what the requirements for it were, how it works, how it should work, its theoretical background, what the customer wants to see in it, what the competitors' solutions look like, which software components are used in it, what the component interfaces are, how those components are set up (configured) to provide the functionality, what the flow of data is, and where the logs of all components are and how they can be read. Someone who can point at a component and say: this component has an error. A person who can orchestrate bug resolution and new feature development.
And this person is very valuable in a fast-growing company, where documentation cannot follow the pace of growth.

Tuesday, July 31, 2012

How to become Smart Grid specialist

"Engineering degrees with focus on power systems, plus IT courses, software development and knowledge of the electrical grid, with a masters preferred." - representative from Siemens Smart Grid division said. Source T&D World, Grid Optimization.

Friday, July 27, 2012

Deadbands as a crucial factor for reducing heavy load on application and database servers

Deadbands are very useful things. First of all, they represent a limit which a signal has to cross in order to be considered changed. In SCADA systems, which usually provide data to other systems, the majority of changes happen very near the current signal value. If a deadband is not set, we will get lots of very similar data; that data would put pressure on all components, and its value is smaller than the actions and load it imposes on the whole system. As some research on real data shows, a deadband as small as 1% can reduce the number of changes entering the system by about 50%, which reduces the overall pressure on the system. Further small deadband increases, to 2% and 3%, would reduce the amount of incoming data significantly more.
So, depending on the situation and project requirements, deadbands are very useful things if they are used properly.
One important aspect of choosing a deadband is what that 1% means - 1% of the last value, or 1% of the signal range.
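As a minimal sketch of how such a check might look on the database side (all values here are made up for illustration), a range-based deadband compares the change against a percentage of the signal range:

```sql
-- Hypothetical values: signal range 0-200, 1% deadband => threshold of 2.
DECLARE @NewValue    float = 101.2,
        @LastValue   float = 100.9,
        @RangeMin    float = 0,
        @RangeMax    float = 200,
        @DeadbandPct float = 0.01;

IF ABS(@NewValue - @LastValue) >= @DeadbandPct * (@RangeMax - @RangeMin)
    PRINT 'Change accepted - store the new value';
ELSE
    PRINT 'Change suppressed by deadband';   -- the case for these values
```

A last-value-based deadband would use @DeadbandPct * @LastValue as the threshold instead.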

Friday, June 29, 2012

Backup compression ratio

One of the main characteristics of the data used in Smart Grid solutions is that there is a lot of it. SCADA and AMI systems feed the applications, which process the data and put it into the database - SQL Server.
In order to reduce the size of backup files, SQL Server introduced database backup compression starting with the 2008 version.

There are some interesting details.
A compressed backup is created with the following command:

BACKUP DATABASE AdventureWorks
TO DISK = 'D:\DATABASE_BACKUPS\AW.bak' MIRROR TO DISK = 'H:\temp\AW.bak'   
WITH INIT, CHECKSUM, FORMAT, COMPRESSION


The compression level can be found by first locating the backup set ID of the backup, by issuing the following command:

select * from msdb..backupset order by backup_finish_date desc

where you find the backup_set_id of the backup you made, and with that backup_set_id execute the following command:

SELECT backup_size/compressed_backup_size
FROM msdb..backupset
WHERE backup_set_id = @backup_set_id -- replace @backup_set_id with the id from the previous step


And you will get the backup compression ratio. In my case the compression ratio was 4.33.

Whether the backup you are going to restore is compressed or not, you can find out by executing the following command:

RESTORE HEADERONLY FROM DISK = 'H:\temp\AW.bak'

and look for the column named Compressed. If the value is 0 the backup is not compressed, and if the value is 1 the backup is compressed.

Monday, June 11, 2012

Transmission Line Inspecting Robot

During my studies, for my diploma thesis, I worked with a robot, Trilobot. The goal of my thesis was to enable the robot to move in an unknown environment. So this story, which I am going to share here, has some connection with my student work.
It is about a robot which moves along transmission lines and inspects their surroundings (transmission lines are usually very long, many hundreds of kilometers). This information is used to help people maintain their network. The project seems very interesting.

Hammerora for SQL Server

Although I thought Hammerora was predominantly used for Oracle testing - we used this tool to verify Oracle RAC - on Brent Ozar's site I found an article where Kendra Little, one of Brent's associates, discussed using Hammerora for SQL Server testing.

I presume that the "ora" in the tool's name refers to Oracle :-)

A useful book describing the use of Hammerora with Oracle is Pro Oracle Database 11g RAC on Linux.

Here is a link to Hammerora home page.

What is interesting about this tool is that you can run TPC-C and TPC-H benchmarks. More about TPC can be found at this link. Once you know your current TPC-x results, you can compare them with the best results shown on the TPC site, and by tuning your installation - SQL Server, hardware - you can improve your TPC-x performance. Very interesting :-)

Update, February 2016.
Some other tools for testing IOPS for Oracle are SLOB - the Silly Little Oracle Benchmark - and Orion, a tool which simulates Oracle IO workload. The link for Orion is in Japanese, but it can be easily understood.

Friday, June 1, 2012

SQLIO, Current disk queue length

During some tests I performed on SQL Server on the subject of prefetch, I had to use the SQLIO tool.
In this test, it was started with the usual parameters:

sqlio -kR -t20 -dG -s600 -b64

and the interesting parameter is -t20. This parameter means that sqlio will run 20 threads.
In parallel with sqlio I started Performance Monitor for the counter Logical Disk: Current Disk Queue Length. During the sqlio run, Current Disk Queue Length was constantly 20, the same as the number of threads.

What do these results mean? Does it mean that the disk can process only one request at a time?

I have found some explanation regarding this parameter: "Current Disk Queue Length is the number of requests outstanding on the disk at the time the performance data is collected. Multi-spindle disk devices can have multiple requests that are active at one time, but other concurrent requests are awaiting service. This counter might reflect a transitory high or low queue length, but if there is a sustained load on the disk drive, it is likely that this will be consistently high. Requests experience delays proportional to the length of this queue minus the number of spindles on the disks. For good performance, this difference should average less than two."

So, this is maybe an answer to my question. I probably have only one spindle, and Current Disk Queue Length is equal to the number of outstanding requests.

Another interesting thing is the difference between the disks from the point of view of sequential and random write.
Here are the commands and results:
Disk A
sqlio -kW  -dC -s10 -b64 -fsequential
throughput metrics:
IOs/sec:  1535.20
MBs/sec:    95.95


sqlio -kW  -dC -s10 -b64 -frandom
throughput metrics:
IOs/sec:   120.60
MBs/sec:     7.53 
 

Disk B 
sqlio -kW  -dH -s10 -b64 -fsequential
throughput metrics:
IOs/sec:   102.90
MBs/sec:     6.43



sqlio -kW  -dH -s10 -b64 -frandom
throughput metrics:
IOs/sec:   126.10
MBs/sec:     7.88


It can be seen that Disk A has about 15 times faster sequential write than Disk B (95.95 vs. 6.43 MBs/sec), while random write is similar on both disks. What can be concluded from these figures?

A very interesting and informative link describing SQLIO usage can be found here.

Thursday, May 31, 2012

Delete cache

When you need to delete data from the cache, it can be done with the following command:


DBCC DROPCLEANBUFFERS
 
More info about this subject can be found at this link.
 
Also, if you want to remove procedure plans from the cache, it can be done with the following command:

DBCC FREEPROCCACHE
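DBCC FREEPROCCACHE can also free a single plan instead of the whole cache, if you pass it a plan handle. A sketch of finding a handle first (the handle value in the comment below is hypothetical):

```sql
-- List cached plans together with their SQL text.
SELECT cp.plan_handle, cp.usecounts, st.text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st;

-- Then pass one of the returned handles, for example:
-- DBCC FREEPROCCACHE (0x06000500...);  -- hypothetical handle, use a real one
```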

More info can be found at the following link.

Wednesday, May 23, 2012

How to access a property (attribute) in an assembly

Yesterday we had a problem accessing a property (attribute) in an assembly created over some dll.

For more about assemblies, please read this article.

It is not a problem to create a function over some method in an assembly, but if you want to access a property, the following trick should be used.

Instead of creating the function over the property name, add the prefix get_ before the property name, as shown in the following code:

CREATE FUNCTION dbo.SomeFunction ()
RETURNS [nvarchar](128) WITH EXECUTE AS CALLER
AS
EXTERNAL NAME [AssemblyName].[NamespaceName].get_PropertyName
GO
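Assuming the assembly was registered and the function created successfully, it is then called like any scalar function (the names are the placeholder ones from the example above):

```sql
SELECT dbo.SomeFunction() AS PropertyValue;
```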


One of my colleagues helped me with this and I am thankful for that.

Friday, April 6, 2012

SQL Server database states

Because it is necessary to monitor the states of databases in SQL Server, one way of doing this is by executing the following SQL command:

select count(*) as NoOnline from master.sys.databases where state <> 0

This script will return the number of databases in SQL Server which are not online.
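If you also want to see which databases those are, and in which state each of them is, the same view gives that directly:

```sql
SELECT name, state_desc
FROM master.sys.databases
WHERE state <> 0;
```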

If we would like to see the current state of a specific database, we can do it with the following SQL:

select state_desc from master.sys.databases where name = 'database_name'

The field state_desc can have the following database state values:

ONLINE
RESTORING
RECOVERING
RECOVERY_PENDING
SUSPECT
EMERGENCY
OFFLINE

A description of these states can be found at the following link.

Thursday, March 22, 2012

How to start SQL Server from the command line, and what happens next

If you want to start SQL Server from the command line, go to the Binn folder of the desired SQL Server instance:

cd \Program Files\Microsoft SQL Server\MSSQL10_50.1\MSSQL\Binn

If there are multiple SQL Server installations on a machine, there could be a problem if you do not go to the desired location, because there would be multiple sqlservr.exe binaries.

You could start one SQL Server instance from another instance's Binn folder (with a different instance name), but that can cause unexpected problems.

So, when in the desired Binn folder, execute:

sqlservr -s instancename

What's happening next?

In the Windows registry, there is the following path:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server

and under it the following value:
InstalledInstances

which lists the instance names installed on your machine. One of these instances has the same name as the one you entered in the previous command that starts SQL Server.

For each of these instances there is a corresponding registry path.

For example, if one of the instances is named SQLEXPRESS, there will be the following registry path:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL10.SQLEXPRESS

where the trailing SQLEXPRESS is the name of the instance.

Under this registry key there is a key MSSQLServer -> Parameters, which holds the parameters used when starting the SQL Server instance.

By default there should be 3 parameters:
-d - the path to the master database data file
-l - the path to the master database log file
-e - the path to the error log

The path to the master database is essential because it is the first database brought online, and it contains the data needed for all other databases which should be started.
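If the instance is already running, the same startup parameters can also be read from SQL Server itself; the DMV sys.dm_server_registry (available from SQL Server 2008 R2 SP1 onward) exposes them as SQLArg values:

```sql
SELECT value_name, value_data
FROM sys.dm_server_registry
WHERE value_name LIKE N'SQLArg%';
```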


SQL Server error log

To view the SQL Server error log, start Microsoft SQL Server Management Studio, open Object Explorer, expand a server, expand Management, and then expand SQL Server Logs.

Right-click a log and click View SQL Server Log.

Or execute the following statement:
SELECT SERVERPROPERTY('ErrorLogFileName')

You should get something like this:
G:\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Log\ERRORLOG
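The log can also be read from a query window with the sp_readerrorlog procedure (undocumented, but widely used):

```sql
-- First parameter: 0 = current log, 1 = first archived log, and so on.
EXEC sp_readerrorlog 0;
```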