Sunday, December 27, 2009

Estimating database table sizes using SP_SpaceUsed

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/estimating_database_table_sizes_using_sp_spaceused.htm]

One of Steve McConnell's tips from his great book on estimating (Software Estimation: Demystifying the Black Art) is that you should not estimate that which you can easily count. Estimating database table sizes is a great example of this. Sure, on one hand disk space is relatively cheap; on the other hand, you want at least a ballpark estimate of how much space your app will need - will the database size explode and no longer fit on your existing SAN?

Here's a general strategy to estimate table size:

1. Determine the general schema for the table

Note the column datatypes that could be huge (like varchar(2000) for notes, or xml, or blob)

2. Find out how many rows you expect the table to contain

Is the table extending an existing table, and therefore proportional to it? For example, do you have an existing "Employee" table with 100,000 records, and you're creating a new "Employee_Reviews" table where each employee has 2-3 reviews (and hence you're expecting 200,000 - 300,000 records)? If the table is completely new, then perhaps you can guess the rowcount based on expectations from the business sponsors.

If the table has only a few rows (perhaps less than 10,000 - but this depends), the size is probably negligible, and you don't need to worry about it.
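Following McConnell's "count, don't estimate" advice, you can often derive the expected rowcount directly from existing data rather than guessing. A minimal sketch - the table name and the 2.5 reviews-per-employee multiplier are hypothetical stand-ins for whatever your business sponsors expect:

--Hypothetical: derive the new table's rowcount from an existing, related table,
--assuming an average of 2.5 reviews per employee (a made-up business expectation)
select COUNT(*) * 2.5 as EstimatedReviewRows
from Employee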

3. Write a SQL script that creates and populates the table.

You can easily write a SQL script to create a new table (and add its appropriate indexes), and then use a WHILE loop to insert 100,000 rows. This can be done on a local instance of SQL Server. Note that you're not inserting the total number of rows you estimated - i.e. if you estimated that the table will contain 10M rows, you don't need to insert 10M rows - rather, you want a "unit size", which you can then multiply by however many rows you expect. (Indeed, you don't want to wait for 10M rows to be inserted, and your test machine may not even have enough space for that much test data.)

For variable data (like strings), use average-sized data. For nullable columns, populate them based on how likely you think they'll be used, but err on the side of more space.

Obviously, save your script for later (a full example appears in the Summary below).

4. Run SP_SPACEUSED

SP_SpaceUsed displays how much space a table is using. It shows results for both the data and the indexes (never forget the index space).

You can run it as simply as:

exec SP_SPACEUSED 'TableTest1'

Now you can get a unit size per row. For example, if the table has 3000KB of data and 1500KB of indexes, and you inserted 100,000 rows, then the average size per row is (3000KB + 1500KB) / 100,000 = 0.045KB. Then multiply that by however many rows you expect.
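If you want to script this step as well, you can capture SP_SpaceUsed's output into a temp table and do the arithmetic in SQL. This is just a sketch - the column layout below matches the SP_SpaceUsed result set on SQL Server 2005/2008 (verify it on your version), and the 10M target rowcount is only an example:

--Capture SP_SpaceUsed's result set (name, rows, reserved, data, index_size, unused)
create table #SpaceUsed (
    [name] sysname,
    [rows] bigint,
    [reserved] varchar(50),
    [data] varchar(50),
    [index_size] varchar(50),
    [unused] varchar(50)
)

insert into #SpaceUsed
    exec SP_SPACEUSED 'TableTest1'

--Strip the ' KB' suffix, compute size per row, and extrapolate to 10M rows (result in MB)
select
    ( cast(replace([data], ' KB', '') as decimal(18,2))
    + cast(replace([index_size], ' KB', '') as decimal(18,2)) )
    / [rows] * 10000000 / 1024.0 as EstimatedMB
from #SpaceUsed

drop table #SpaceUsed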

This may seem like a lot of work, and there are certainly formulas that theoretically predict table size. My concern is that it's too easy for devs to miscalculate with a formula - forgetting the indexes, not accounting for the table schema itself, or just fumbling one of the extra steps.

5. Estimate the expected growth

Knowing the initial size is great, but you also must be prepared for growth. We can make educated guesses based on the driving factors of the table size (maybe new customers, a vendor data feed, or user activity), and we can then estimate the growth based on historical data or the business's expectations. For example, if the table is driven by new customers, and the sales team expects 10% growth, then prepare for 10% growth. Or if the table is based on a vendor data feed, and historically the feed has 13% new records every year, then prepare for 13% growth.

Depending on your company's SAN and DBA strategy, be prepared to have your initial estimate at least include enough space for the first year of growth.

6. Add a safety factor

There will be new columns, new lookup and helper tables, a burst of additional rows, maybe an extra index - something that increases the size. So, always add a safety factor.
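Putting steps 4 through 6 together, the whole estimate reduces to a one-line calculation. A sketch, where the 0.045KB unit size comes from the example in step 4, and the 10M rows, three years of 13% compound growth, and 1.5x safety factor are all illustrative assumptions:

select 0.045              --unit size in KB per row (from SP_SpaceUsed, step 4)
     * 10000000           --expected initial rowcount (step 2)
     * POWER(1.13e0, 3)   --three years of 13% compound growth (step 5; e0 forces float)
     * 1.5                --safety factor (step 6)
     / 1024.0 / 1024.0 as EstimatedGB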

7. Prepare for an archival strategy

Some data sources (such as verbose log records) are prone to become huge. Therefore, always have a plan for archival - even if the plan is that you can't archive (for example, it's a transactional table and the business requires regular transactions on historical data). However, sometimes you get lucky; perhaps the business requirements say that, based on the type of data, you only legally need to keep four years' worth of data. Or perhaps after the first two years, the data can be archived in a data warehouse, and then you don't need to worry about it anymore (this just passes the problem to someone else).

Summary

Here's a sample T-SQL script to create the table and index, insert data, and then call SP_SpaceUsed:

USE [MyTest]
GO

if exists (select 1 from sys.indexes where [name] = 'IX_TableTest1' and [object_id] = OBJECT_ID('TableTest1'))
    drop index IX_TableTest1 on TableTest1

if exists (select 1 from sys.tables where [name] = 'TableTest1')
    drop table TableTest1

--=========================================
--Custom SQL table
CREATE TABLE [dbo].[TableTest1](
    [SomeId] [int] IDENTITY(100000,1) NOT NULL,
    [phone] [bigint] NOT NULL,
    [SomeDate] [datetime] NOT NULL,
    [LastModDate] [datetime] NOT NULL
) ON [PRIMARY]

--Index
CREATE UNIQUE NONCLUSTERED INDEX [IX_TableTest1] ON [TableTest1]
(
    [SomeId] ASC,
    [phone] ASC
) ON [PRIMARY]
--=========================================


--do inserts

declare @max_rows int
select @max_rows = 1000

declare @i as int
select @i = 1

WHILE (@i <= @max_rows)
BEGIN
    --=============
    --Custom SQL Insert (note: use identity value for uniqueness)
    insert into TableTest1 (phone, SomeDate, LastModDate)
    select 6301112222, getDate(), getDate()
    --=============

    select @i = @i + 1

END

--Get sizes
exec SP_SPACEUSED 'TableTest1'

 

Sunday, December 13, 2009

How not to estimate

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/how_not_to_estimate.htm]

Being responsible for the end-to-end solution really makes me think about how to estimate. The flipside is that it also makes me think about how not to estimate:
  1. Wild guess - sometimes this seems like your only option, but most of the time there are ways to improve on the guess (base it on similar projects, or split it into components and estimate those individually).
  2. What you think the boss wants to hear - This may seem like the easy way to initially win favor with the boss, but it will come back with a vengeance when the estimate drastically deviates from reality. Also, because the boss always wants to hear shorter schedules, this approach has a huge bias toward underestimating that will send you off course.
  3. Base it on unrelated projects - Historical data is great, but don't compare apples to oranges. That a WinForm app took 3 months tells you almost nothing about how long an ASP.Net app will take.
  4. Pick an arbitrary big number - During crunch time, it's easy to think that everything will magically be better "next week" or "next month" ("that will give us enough time to fix everything"), but then that time rolls around and the project is still behind schedule.

All of these are bad estimation methods because they miss the fundamental point - how long will the project really take to build in the real world? Wild guesses or the boss's wishes are not necessarily grounded in reality, so basing estimates on them is barking up the wrong tree.

I realize it's easy to say "how not to do something". I'd recommend Steve McConnell's book, Software Estimation: Demystifying the Black Art, for how to do a great job of estimating.

 

Friday, November 27, 2009

BOOK: 97 things every software architect should know

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/book_97_things_every_software_architect_should_know.htm]

A few months back I finished the book 97 Things Every Software Architect Should Know. I've been slow on blogging, so I hadn't gotten around to my standard follow-up post for books I read.

The book consists of 97 short essays, by various accomplished architects, each with a quick insight. It was a casual and fun read, the kind of book where it's easy to sneak in a few pages between changing the kids' diapers and fixing the house. It's got a lot of good points. I especially recall an essay about how "the database is your fortress" - GUI and middle-tier apps change, but you will always have your database.

For hard-core architecture, I thought that Microsoft .NET: Architecting Applications for the Enterprise and Patterns of Enterprise Application Architecture were much more systematic and thorough. But overall, it was still a fun read.

Thursday, October 8, 2009

Can you still be technical if you don't code?

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/can_you_still_be_technical_if_you_dont_code.htm]

Can you still be technical if you don't code? A lot of developers have a passion for the technology, do a great job in their current role of implementing solutions (which requires coding), and then get "promoted" into some "big picture" role that no longer involves implementation - ironically, the very thing they did so well. These higher-level roles often cover lots of non-trivial things, but not actual coding. For example:

  • Infrastructure (Servers, SAN space, database access, network access)
  • Design decisions
  • Dealing with legacy code
  • Handling outsourcing, insourcing, consultants
  • Build-vs-buy
  • Vendor evaluations/score card; integrate the vendor's product into your own
  • Coordinate large-scale integration of many apps from different environments
  • Coordination among multiple product life cycles
  • Writing guidance docs
  • Code reviews
  • Occasional prototypes
  • Configuration

On one hand, these types of tasks require technical knowledge in that you wouldn't expect a non-technical person to perform them. On the other hand, they don't seem in the same category as hands-on coding.

What do you think - can you be technical (or remain technical) without actually writing code?

Monday, October 5, 2009

Reasons to NOT put version history in the comments

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/reasons_to_not_put_version_history_in_the_comments.htm]

Some coding standards ask that developers add revision history to the top of each method. Not just the normal "summary" and "parameter" tags that can be used to automatically generate documentation, but rather a full-blown revision log with developer name, date, and comment. This was bigger in the '80s and '90s, when source control and refactoring weren't as common. These days, however, putting revision history in your comments has some really big problems:
  • Extra effort - It requires extra effort from developers, and usually the tedious kind. It relies on manual developer discipline, so architects don't have a good way to enforce it.
  • Bad for refactoring - It discourages refactoring of methods. Say you split a method in two - how do you split up the commented version history?
  • Source control already provides this - It doesn't tell you anything that source control history won't already give you. Even worse, it can be misleading - source control is the true authority, and a developer could accidentally type the wrong comment (or forget it entirely). Also, a log at the top of the method can't pinpoint what changed in the middle, whereas a source control diff would instantly tell you.

Perhaps for SQL I can see the benefit: when the DBA runs sp_helptext on a stored proc, they get a quick history (and databases aren't usually refactored like C# code). However, for middle-tier code, putting revision history in comments seems like an unwise use of time.
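The SQL case works because sp_helptext returns the proc's full source - comments included - so a DBA with no access to source control still sees the log. The proc below is a made-up illustration:

--Hypothetical proc with an in-comment revision log:
create procedure dbo.GetActiveEmployees
/*
 Revision History:
 2009-08-01  TS  Created.
 2009-09-15  TS  Added IsActive filter.
*/
as
    select EmployeeId from Employee where IsActive = 1
go

--The DBA sees the full text, revision comments and all:
exec sp_helptext 'dbo.GetActiveEmployees'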

LINK: Comments as version control
 

Thursday, October 1, 2009

What makes something Enterprise?

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/what_makes_something_enterprise.htm]

There's a world of difference between a prototype hammered out over a weekend and an enterprise app ready for the harsh world of production. Here's a somewhat random brainstorm. In general (there's always an exception), Enterprise apps:

  • Are scalable - they handle large loads and can be called many times.
  • Have a retry strategy - for example, trying the external service three times before "failing" (see the sketch after this list).
  • Have a failover strategy, like an active-passive machine cluster for maximum uptime, and a disaster recovery site.
  • Send notifications.
  • Handle invalid data (like states, zip codes, and numbers).
  • Can integrate with other systems (perhaps providing web service wrappers, or command line APIs, or publicly accessible data repositories that other apps can modify).
  • Are deployable - "it works on my machine" absolutely does not cut it.
  • Have logging - this is especially useful for debugging in production, or measuring how many errors (and which types) are thrown.
  • Have long-running processes (hours, days, or even weeks) - not just a single thread in memory.
  • Have async processes, which usually means concurrency and threading problems.
  • Support multiple instances of the app running. You can open two copies of Word, or run two MSBuild scripts at the same time.
  • Handle product versioning.
  • Care about the hardware they run on (enough CPU and memory).
  • Have a pluggable architecture - You may need to switch data providers (Oracle/SQL/Xml).
  • Have external data sources (web services, ftp file dumps, external databases).
  • Can scale out, such as adding more web servers, or splitting the database across multiple servers.
  • Have security constraints (both hacking, and functional).
  • Have processes that are documented (not just for training, but also for legal auditing and compliance issues).
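Since the retry strategy is one of the more mechanical items above, here's a sketch of the shape it usually takes - in T-SQL to match this blog's other examples, though the post's own example is an external service call (dbo.SomeFlakyOperation is hypothetical):

declare @attempt int, @max_attempts int
select @attempt = 1, @max_attempts = 3

while (@attempt <= @max_attempts)
begin
    begin try
        exec dbo.SomeFlakyOperation  --the flaky call (hypothetical)
        break                        --success, stop retrying
    end try
    begin catch
        if (@attempt = @max_attempts)
            raiserror('Operation failed after 3 attempts', 16, 1)
        select @attempt = @attempt + 1
    end catch
end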

Much of this code isn't the fun, "glamorous" stuff. However, it's this kind of robustness that separates the "toys" from the enterprise workhorses.

See also: Enterprise Data Access, Enterprise Caching

Tuesday, September 29, 2009

A quick overview of enterprise object caching

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/a_quick_overview_of_enterprise_object_caching.htm]

Caching is a performance mechanism where you store an expensive-to-create value for future consumption. For example, you may cache dropdown values to spare yourself expensive database hits. Caching is one of those buzzwords that everyone knows their application should use, but surprisingly few apps really do.

Books have been written on caching, so this is just a quick brain dump based on my personal experiences. Also note that I refer to "the database" a lot because it's the main data dependency that most developers can relate to, but really it could be anything (a web service, an external file, etc.).

Frontend and Backend Caching

Frontend UI cache:
  • Where is it located? Local (in process).
  • Pro: Faster because it's local - no remote hit.
  • Con: Does not handle updating values - data may be stale. Limited to just HttpContext.
  • Example: Asp.Net HttpContext.Cache

Backend cache:
  • Where is it located? Remote (external machines).
  • Pro: Handles updating stale data in distributed systems. Handles any CLR serializable object, independent of the UI layer.
  • Con: Slower because it's remote - you still need to pay for the remote hit.
  • Example: Memcache, Velocity, others...

Obvious follow-up question: "Would you double-cache something, taking data from the backend cache and saving it to the frontend cache?" Sure, if it benefited your specific scenario. Ideally the backend cache is totally encapsulated anyway, so your frontend UI developer wouldn't even know whether the data they're working with came from a cache or not.

What is a good candidate for caching?

Any data that:

  • Takes a lot of time to create, either due to a remote hit (to a database or web service), or a large calculation time (like querying a million rows).
  • Has a small final result - you query a million rows just to return a single value.

  • Does not change - the problem with caching is stale data.

  • Has minimal dependencies. If your object touches 10 tables for creation, then there's a much greater chance of it becoming stale. This is one benefit of loosely coupled (and then batched) objects, instead of spaghetti code.

  • Has many reads, but very few writes. For example, system-level data (that everyone constantly requests) is good for caching, but employee-level data may not be.

  • Is retrieved externally and requires high uptime - for example, you make a web service call to get some value, you call the service again 5 minutes later, the service is down, and you really wish you had even a "stale" copy of that data.

The canonical example would be something like city-state dropdowns. Say it's initially a remote database hit to some "City/State" tables, it returns a small amount of data, it changes infrequently (the US has had 50 states for the last half-century), and it's read many times but rarely written - so it's not prone to stale data.

What is a bad candidate for caching? Pretty much, the opposite of the good criteria.

Pitfalls with caching

Merely creating the dictionary isn't the hard part. The problem is integrating it (seamlessly) into your data persistence layer, then ensuring the cache doesn't become stale - especially across a distributed environment - and then making sure it actually improves your performance instead of degrading it.

Stale data is the bane of caching. There are at least a few ways to deal with stale data:

  • Apply a time-out policy such that all data expires after N minutes, but that won't be acceptable for most scenarios.
  • If your app is changing the data (such as updating a details page), then have the DAL method (that calls the update SQL) also ping the cache to clear the stale object. This requires that your application somehow keeps track of which objects depend on which piece of data. It works great with a domain model, which requires more design upfront, but can have huge payoffs.
  • If someone else is directly changing the database, such as a DBA running ad-hoc SQL in production, then consider providing some admin console that lets them clear segments of the cache. For example, if the DBA did a mass-update of all salaries, then have the admin page allow you to flush all salary-related objects out of the cache. This requires some infrastructure for tagging each cached object (perhaps a master config file), and discipline from the DBA to check that admin page.
  • Beware of "database cache dependencies" (ASP.Net 2.0?), that claim to let you apply triggers to the database table, and then automatically clear the appropriate cache items when a specific row/column is updated. I've personally never gotten this to successfully work, have heard lots of horror stories (especially when deploying it across a DMZ), and it forces the domain design into the database instead of the middle tier. Although I'm all ears to anyone who's had a success story here.

Some other things to keep in mind:

  • Where should my cache tie in? Ideally, you'd want the backend cache abstracted via your dataAccess layer. This becomes very feasible with CodeGeneration. Whether data is pulled from cache or the live database is just a tuning option. Just like you don't want to put in-line SQL throughout your codebehind pages, you probably don't want to tightly-couple all your UI code to your cache provider. The frontend cache can be referenced in your UI, but again, be aware of too much plumbing code that "designs you into a corner".

  • Only temporary - A cache is not a persistent data store, it is not merely a "mirror" to scale out your database, as you must account for the cache being cleared. A data access method should always have a means to recreate the cached data.

  • Why use a remote cache? If hitting the cache and hitting the database are both remote calls, what's the difference? In its simplest form, this helps scale out the database (which is usually the bottleneck). Every hit on the cache is one less hit on the already-overloaded database. And even after the remote hit, the cache can have a much faster lookup time, because while the cache stores an already-created object, the database may still need to query a million rows to collect the data.
  • Control Panel - You'll want to provide an easy way to flush the entire cache, or even segments of the cache, without restarting any servers. It's also great moral support to have a statistics page showing how many thousand (million) database calls have been spared.

  • Configuration - You'll want to provide a way to configure almost everything: the cache durations, what category of duration (short, medium, long), which objects get cached, and perhaps even turn off the entire cache for emergency troubleshooting. Caching is something that you want to tune based on actual production results. Ideally control of what gets cached is all abstracted to a few easy-to-manage config files. You do not want these config values hard-coded throughout your app.

  • Beware of over-caching - If done the wrong way, caching can actually hurt your performance. Say you cache something that is continually becoming obsolete: instead of just paying for the normal database hit, you continually pay for the extra cache query on top of it.

Good things to read

There's an endless list of info on caching. Here are some that could be useful.

Yes, there's a ton more that can be said about caching. Again, this is just a quick brain dump.