
Wednesday, May 19, 2010

Using SMO to automate SQL tasks

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/using_smo_to_automate_sql_tasks.htm]

I used to shell out to the console with osql, or split scripts on the "GO" keywords and use ADO.NET to execute each batch as a non-query.

And years ago, I guess that was fine.

But SQL Server Management Objects (SMO) are phenomenal. They just work. With a few lines of C#, you can create or kill a database, install schema scripts, enumerate database objects, and much more. You also get exceptions thrown if there is an error (much easier than parsing the osql output from the command line).

Good stuff.

Sunday, December 27, 2009

Estimating database table sizes using SP_SpaceUsed

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/estimating_database_table_sizes_using_sp_spaceused.htm]

One of Steve McConnell's tips from his great book on estimating (Software Estimation: Demystifying the Black Art) is that you should not estimate that which you can easily count. Estimating database table sizes is a great example of this. Sure, on one hand disk space is relatively cheap; on the other hand, you want at least a ballpark estimate of how much space your app will need - will the database size explode and no longer fit on your existing SAN?

Here's a general strategy to estimate table size:

1. Determine the general schema for the table

Note the column datatypes that could be huge (like varchar(2000) for notes, or xml, or blob)

2. Find out how many rows you expect the table to contain

Is the table extending an existing table, and therefore proportional to it? For example, do you have an existing "Employee" table with 100,000 records, and you're creating a new "Employee_Reviews" table where each employee has 2-3 reviews (and hence you're expecting 200,000 - 300,000 records)? If the table is completely new, then perhaps you can guess the row count based on expectations from the business sponsors.

If the table has only a few rows (perhaps less than 10,000 - but this depends), the size is probably negligible, and you don't need to worry about it.

3. Write a SQL script that creates and populates the table.

You can easily write a SQL script to create a new table (and add its appropriate indexes), and then use a WHILE loop to insert 100,000 rows. This can be done on a local instance of SQL Server. Note that you're not inserting the total number of rows you estimated - i.e. if you estimated that the table will contain 10M rows, you don't need to insert 10M rows - rather, you want a "unit size", which you can then multiply by however many rows you expect. (Indeed, you don't want to wait for 10M rows to be inserted, and your test machine may not even have enough space for that much test data.)

For variable-length data (like strings), use average-sized data. For nullable columns, populate them based on how likely you think they'll be used, but err on the side of more space.

Obviously, save your script for later.

4. Run SP_SPACEUSED

SP_SpaceUsed displays how much space a table is using. It shows results for both the data and the indexes (never forget the index space).

You can run it as simply as:

exec SP_SPACEUSED 'TableTest1'

Now you can get a unit size per row. For example, if the table has 3000KB for data and 1500KB for indexes, and you inserted 100,000 rows, then the average size per row is (3000KB + 1500KB) / 100,000. Then, multiply that by however many rows you expect.
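
As a small sketch of that arithmetic in T-SQL, plugging in the made-up numbers from this example (3000KB of data, 1500KB of indexes, 100,000 test rows, 10M expected rows):

declare @dataKB int, @indexKB int, @rowsInserted int, @expectedRows int
select @dataKB = 3000, @indexKB = 1500, @rowsInserted = 100000, @expectedRows = 10000000

select
    (@dataKB + @indexKB) * 1.0 / @rowsInserted as [KB per row],
    (@dataKB + @indexKB) * 1.0 / @rowsInserted * @expectedRows / 1024.0 as [Projected MB]
--roughly 0.045 KB per row, or about 440 MB for 10 million rows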

This may seem like a lot of work, and there are certainly ways to predict the size theoretically by plugging numbers into a formula. My concern is that it's too easy for devs to miscalculate the formula (like forgetting the indexes, not accounting for the initial table schema itself, or just missing one of the extra steps).

5. Estimate the expected growth

Knowing the initial size is great, but you also must be prepared for growth. We can make educated guesses based on the driving factors of the table size (maybe new customers, a vendor data feed, or user activity), and we can then estimate the growth based on historical data or the business's expectations. For example, if the table is driven by new customers, and the sales team expects 10% growth, then prepare for 10% growth. Or if the table is driven by a vendor data feed, and historically the feed has 13% new records every year, then prepare for 13% growth.
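
Growth also compounds across years. Here's a quick sketch of that math (the 440 MB starting size is just the made-up figure from the earlier example, and 13% is the vendor-feed rate above):

--projected size after 3 years at 13% annual growth
declare @initialMB float, @annualGrowth float, @years int
select @initialMB = 440, @annualGrowth = 0.13, @years = 3

select @initialMB * power(1 + @annualGrowth, @years) as [Projected MB]
--about 635 MB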

Depending on your company's SAN and DBA strategy, be prepared to have your initial estimate at least include enough space for the first year of growth.

6. Add a safety factor

There will be new columns, new lookup and helper tables, a burst of additional rows, maybe an extra index - something that increases the size. So, always add a safety factor.

7. Prepare for an archival strategy

Some data sources (such as verbose log records) are prone to become huge. Therefore, always have a plan for archival - even if the plan is that you can't archive (say, because it's a transactional table and the business requires regular transactions against historical data). However, sometimes you get lucky; perhaps the business requirements say that, given the type of data, you only legally need to keep 4 years' worth. Or perhaps after the first 2 years the data can be archived to a data warehouse, and then you don't need to worry about it anymore (this just passes the problem to someone else).

Summary

Here's a sample T-SQL script to create the table and index, insert data, and then call SP_SpaceUsed:

USE [MyTest]
GO

if exists (select 1 from sys.indexes where [name] = 'IX_TableTest1')
    drop index TableTest1.IX_TableTest1

if exists (select 1 from sys.tables where [name] = 'TableTest1')
    drop table TableTest1

--=========================================
--Custom SQL table
CREATE TABLE [dbo].[TableTest1](
    [SomeId] [int] IDENTITY(100000,1) NOT NULL,
    [phone] [bigint] NOT NULL,
    [SomeDate] [datetime] NOT NULL,
    [LastModDate] [datetime] NOT NULL
) ON [PRIMARY]

--Index
CREATE UNIQUE NONCLUSTERED INDEX [IX_TableTest1] ON [TableTest1]
(
    [SomeId] ASC,
    [phone] ASC
) ON [PRIMARY]
--=========================================


--do inserts

declare @max_rows int
select @max_rows = 1000

declare @i as int
select @i = 1

WHILE (@i <= @max_rows)
BEGIN
    --=============
    --Custom SQL Insert (note: use identity value for uniqueness)
    insert into TableTest1 (phone, SomeDate, LastModDate)
    select 6301112222, getDate(), getDate()
    --=============

    select @i = @i + 1

END

--Get sizes
exec SP_SPACEUSED 'TableTest1'

 

Wednesday, June 3, 2009

Why would someone put business logic in a stored procedure?

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/why_would_someone_put_business_logic_in_a_stored_procedure.htm]

I have a strong bias against injecting business logic into stored procedures. It is not scalable, is hard to reuse, is a pain to test, has limited access to your class libraries, etc... However, I think there are some legitimate cases for putting advanced T-SQL in a SP - but you're playing with fire.

Here are some reasons - basically, performance, functionality, or legacy constraints force you into it.

  1. Aggregations, Joins, and Filters - If you need to sum up 1000 child rows, it's not feasible to return all that data to the app server and sum it in C#. Obviously using the SQL sum() function would be the better choice. Likewise, if your logic is part of a complex filter (say for a search page), it performs much faster to filter at the database and only pull over the data you need.
  2. Batch calls for performance - SQL Server is optimized for batch calls. A single update with a complex where clause will likely run much faster than 1000 individual updates with simple where clauses (see the sketch after this list). For performance-critical reasons, this may force logic into the stored procedure. However, in this case, perhaps you can code-generate the SQL scripts from some business-rules input file, so you're not writing tons of brittle SQL logic.
  3. Database integration validation - Say you need to ensure that code is unique across all rows in the table (or for a given filter criteria). This, by definition, must be done on the database.
  4. Make a change to legacy systems where you're forced into using the SP - Much of software engineering is working with legacy code. Sometimes this forces you into no-win situations, like fixing some giant black-box stored procedure. You don't have time to rewrite it, the proc requires a one-line change to work how the client wants, and making that change in the stored proc is the least of the evils.
  5. The application has no business tier - Perhaps this procedure is not for an N-tier app. For example, maybe it's for a custom report, and the reporting framework can only call stored procs directly, without any middle-tier manipulation.
  6. Performance critical code - Perhaps "special" code must be optimized for performance, as opposed to maintainability or development schedule. For example, you may have some rules engine that must perform, and being closer to the core data allows that. Of course, sometimes there may be ways to avoid this, such as caching the results, scaling out the database, refactoring the rules engine, or splitting it into CRUD methods that could be batched with an ORM mapping layer.
  7. Easy transactions - It can be far easier for a simple architecture to rollback a transaction in SQL than in managed code. This may press developers into dumping more logic into their procs.
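
To illustrate the batch-call point (#2), here's a minimal sketch - the Orders and Customers tables and their columns are made up for the example:

--Row-by-row from the app tier: 1000 round trips, each with a simple where clause
update Orders set Status = 'Expired' where OrderId = 1
update Orders set Status = 'Expired' where OrderId = 2
--...and so on, 998 more times

--Set-based: a single update with a more complex where clause
update Orders
set Status = 'Expired'
where Status = 'Open'
    and CustomerId in (select CustomerId from Customers where IsActive = 0)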

Note that for any of these reasons, consider at least still testing your database logic and refactoring your SQL scripts.

These "reasons" are probably a bad idea:

  1. Easy deployment - You can update a SQL script in production real quick! Just hope that the update doesn't itself have an error which accidentally screws up all your production data. Also consider why the procedure needs to be updated out-of-cycle in the first place. Was it something that would ideally have been abstracted out to a proper config file (whose whole point is to provide easy changes post-deployment)? Was it an error in the original logic, which could have been more easily prevented if it had been coded in a test-friendly language like C#? Also, keep in mind that you can re-deploy a .Net assembly if you match its credentials (strong name, version, etc...), which is very doable given an automated build process.
  2. It's so much quicker to develop this way! - Initially it may be faster to type the keystrokes, but the lack of testability and reusability will clobber the schedule during maintenance.

In my personal experience, logic ends up in stored procedures far more often for the bad reasons ("it's real quick!") than for the good ones - and the majority of the time it can be refactored out, to the benefit of the application and its development schedule.

Wednesday, March 4, 2009

Refactoring SQL code with file includes and variables

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/refactoring_sql_code_with_file_includes_and_variables.htm]

Everyone who maintains code knows that duplicated code is bad. While OOP languages like C# (and even XML "languages" like MSBuild) provide ways to refactor, SQL seems lacking in such features. If you run a similarity analyzer (like Simian) on your SQL, you can probably find large amounts of duplicated code. Two common refactoring techniques to help with this would be:

  • Dynamic include – Include one snippet within another. For example, we could have a chunk of code that is duplicated between a stored proc and a table function.
  • Variables – We can abstract any SQL code (table names, sql commands) to a variable.

Note that mere stored procs or user-defined functions are insufficient, as they can't handle all snippets (like declaring variables that are then used by the calling code), or they have awful performance when used in a where clause.

We can use SQLCMD scripting to accomplish this (http://msdn.microsoft.com/en-us/library/aa833281.aspx).

How to enable SQLCMD mode in SQL Studio:

http://msdn.microsoft.com/en-us/library/ms174187.aspx

  • Single query window – “On the Query menu, click SQLCMD Mode.”
  • For all windows – “To turn SQLCMD scripting on by default, on the Tools menu select Options, expand Query Execution, and SQL Server, click the General page, and then check the By default open new queries in SQLCMD Mode box.”

How to enable SQLCMD mode in Visual Studio

This requires the database edition of Visual Studio. Click the SQLCMD mode button on the toolbar.

Basic variable test

--set variable, and then use it - use the ":setvar" command
:setvar SomeTable TestCodeGen
select * from $(SomeTable)

 

-- environmental variables too!
select '$(COMPUTERNAME)' --returns my comp name (like 'TimStall2')

This means we could have an external script set the environmental variables (like the PrimaryDataBase), and then easily re-run those in the SQL Editor. Note that you can use the free tool SetX.exe to set environmental variables.


File Include – Basic (use the “:r” command)

--File 1 (var_def.inc):
:setvar PrimaryDB MyDatabase70


--File 2:
:r C:\Development\var_def.inc
select * from $(PrimaryDB).MySchema.TestCodeGen


For example, we could have a "header" file that includes a bunch of variable definitions (like PrimaryDB, ReportDB, etc…), and then include it wherever needed. Or, we could include any other SQL snippet. For example, we could use this to effectively make private functions (instead of only having global functions) that are encapsulated to a single stored proc.

File Include – avoid function in a where clause

--File 1 (myProc_func1.sql):
--some reusable snippet (note how it uses the variable @myNum)
co = '1234' or NumberInteger > @myNum

--File 2:
declare @myNum integer
select @myNum = 10

select * from TestCodeGen
where
:r C:\Development\myProc_func1.sql
and LastChangedBy < GetDate()



Summary
 

One catch to all of this is that if you install your database scripts through your own ADO.Net code, you need to parse these directives yourself. However, that should be easy enough to do, given the strict syntax of the SQLCMD commands.


Note that this substitution happens "at compile time": if you run SQL Profiler, you won't see the ":setvar" or ":r" commands, but rather the already-expanded content. These techniques can be used to help refactor SQL code, just as similar techniques help refactor code in other languages.

 

Tuesday, August 7, 2007

Cloning a temp table schema in SQL

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/cloning_a_temp_table_schema_in_sql.htm]

SQL temp tables can be a functional and performance life-saver because they let you store intermediate results. You can then do resource-heavy calculations on this smaller result set, or just break a complex procedure into multiple, easier-to-handle parts. Sometimes you'll have two temp tables with the same schema. SQL provides an easy way to clone a temp table's schema (thanks to our Paylocity SQL architect for showing me this). Here's a sample:

CREATE TABLE #Main1 --prePaging
(
  colInt int,
  colString varchar(20)
)

Insert Into #Main1 Values(10, 'aaa')
Insert Into #Main1 Values(20, 'bbb')
Insert Into #Main1 Values(30, 'ccc')

--Dynamically create a clone of #Main1, call it #Main2
--This doesn't touch data
Select * Into #Main2 From #Main1 Where 1=2

--show that Main2 exists, and has the right schema
Select * From #Main1
Select * From #Main2

--Now insert some data into Main2
Insert Into #Main2 (colInt, colString) Values(40, 'ddd')
Select * From #Main2

drop table #Main1
drop table #Main2

The key is the line "Select * Into #Main2 From #Main1 Where 1=2", which dynamically creates table #Main2 and sets its columns equal to those of #Main1 (the Where 1=2 ensures no rows are copied).

 



Monday, August 6, 2007

Easily insert huge amounts of test data

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/easily_insert_huge_amounts_of_test_data.htm]

I've plugged the free MassDataHandler tool before - an open source tool that lets you use XML script to easily insert data. One limitation of the tool (perhaps to be solved in future releases) is the inability to easily insert mass amounts of data.

 

Say you want a table to have 10,000 rows so you can do some SQL performance tuning. Ideally you could specify some algorithm to dictate the number of rows and how each row of data is unique. For example, you may want to say "Insert 10,000 rows of company data, using different company names of the form 'co' + i.ToString(), such as co1, co2, co3, etc...".

 

You can easily do this. First you could use the MDH to insert the parent data. Then for specific high-volume tables, you could use the SQL while loop to specify the insert strategy, like so:

Declare @i int
select @i = 1

WHILE (@i <= 10000)
BEGIN
    --Define the dynamic data to insert
    Declare @co varchar(10)
    select @co = 'co' + cast(@i as varchar(10))

    --Do the SQL insert
    Insert into MyTable ([co], [SomeColumn])
    Values(@co, 'someData');

    --increment the counter
    select @i = (@i + 1)
END

This would quickly insert 10,000 rows of test data into MyTable. You could customize the technique for other tables, adding multiple inserts in the loop, or adjusting for a multi-column unique index.

 



Thursday, May 31, 2007

How to tune a SQL script

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/how_to_tune_a_sql_script.htm]

Performance is critical, and slow SQL procs are often a huge performance bottleneck. The problem is how to measure it. Microsoft provides SQL Profiler to help with that.

While I'm certainly no DBA, here's a basic tutorial on how to tune a SQL script (check here for more on SQL profiler).

1. Get a database with production-sized data.

Because performance can vary non-linearly (i.e. there may be twice as much data, but it runs twenty times slower), you absolutely need to test with production-sized data, or else all your measurements could be off.

2. Be able to run the SQL script in an isolated, deterministic environment

We don't want to chase ghosts, so make sure you have a deterministic environment: (A) no one else is changing the script-under-test, (B) you can call the script with a single SQL command (like exec for a SP, or select for a function), and (C) you can call the script repeatedly and get the same functional result every time. Once the script works, we can make it work fast.

3. Open up SQL Profiler with the tuning template.

SQL profiler lets you measure how fast each SQL command took. This is invaluable if you have a complicated script with many sub-commands. It's almost like stepping through the debugger where you can evaluate line-by-line.

  1. Open up SQL Profiler (either from SQL Studio > Tools > SQL Server Profiler, or from the Start > Programs menu).
  2. In SQL Profiler, go to File > New Trace, and connect as the SA user.
  3. In the "Use the template", specify "Tuning"
  4. Start the trace.
  5. Profiler starts recording every command being sent to the database server. To filter by the specific SPID that you're running your SP from, run the SP_WHO command in your SQL Studio window to get the SPID (see the snippet after this list), and then in SQL Profiler:
    1. Pause the SQL Profiler trace
    2. Goto File > Properties
    3. A new window opens up, go to the "Events Selection" tab
    4. Select SPID, and in the filter tree on the right, enter the value under the "Equals" node.
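
As a quick reference, here's how to find the SPID of your current window (these are standard T-SQL commands; sp_who lists all sessions if you'd rather look yours up by login or hostname):

--the SPID of the current connection
select @@SPID

--or list every session and find yours
exec sp_who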

 

4. Run your SQL statements and check the Profiler

Simply run your SQL statements in SQL studio, and check the results in SQL profiler.

The tuning template in profiler will record every command and sub-command being run. It will show the total duration (in milliseconds) for each line, and the full SQL text of what was run. This allows you to identify the bottlenecks, and tune those by changing the SQL code to something more optimal.


5. Compare the output to ensure same functionality

If you don't have an exhaustive suite of database tests, you can still help ensure that your proc is functionally equivalent by comparing the original SQL output (before tuning) to the new SQL output (after tuning). For example, you could save the output resultset as a file and then use a file-diff tool like Beyond Compare to ensure they're identical.

Summary

Again, books could be written on SQL tuning. This is just a brief, high-level tutorial to get you started.

 



Sunday, July 9, 2006

SQL Server 2005 has try-catch

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/sql_server_2005_has_trycatch.htm]

I used to do error checking in SQL Server by adding a check after each statement, something like this:

if @@error != 0
begin
    RAISERROR 50001 'Some error message'
    RETURN
end

This has obvious problems (lots of extra code, what if you miss a check somewhere, etc...). However, SQL Server 2005 has try-catch statements!

BEGIN TRY
{ sql_statement | statement_block }
END TRY
BEGIN CATCH
{ sql_statement | statement_block }
END CATCH
[ ; ]
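
For example, here's a minimal sketch (the divide-by-zero is just a stand-in for whatever statement might fail):

BEGIN TRY
    --something that fails
    SELECT 1 / 0
END TRY
BEGIN CATCH
    --inspect the error, then re-raise it
    SELECT ERROR_NUMBER() AS ErrorNumber, ERROR_MESSAGE() AS ErrorMessage
    RAISERROR ('Some error message', 16, 1)
END CATCH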

This is definitely a convenient thing.