Wednesday, June 3, 2009

Why would someone put business logic in a stored procedure?

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/why_would_someone_put_business_logic_in_a_stored_procedure.htm]

I have a strong bias against injecting business logic into stored procedures. It is not scalable, is hard to reuse, is a pain to test, has limited access to your class libraries, etc... However, I think there are some legitimate cases for putting advanced T-SQL in a stored procedure - but you're playing with fire.

Here are some reasons - basically, performance, functionality, or legacy constraints force you into it.

  1. Aggregations, Joins, and Filters - If you need to sum up 1000 child rows, it's not feasible to return all that data to the app server and sum it in C#. Obviously using the SQL sum() function would be the better choice. Likewise, if your logic is part of a complex filter (say for a search page), it performs much faster to filter at the database and only pull over the data you need (see the sketch after this list).
  2. Batch calls for performance - SQL Server is optimized for batch calls. A single update with a complex where clause will likely run much faster than 1000 individual updates with simple where clauses. For performance-critical reasons, this may force logic into the stored procedure. However, in this case, perhaps you can code-generate the SQL scripts from some business-rules input file, so you're not writing tons of brittle SQL logic.
  3. Database integration validation - Say you need to ensure that a code is unique across all rows in a table (or for a given filter criteria). This, by definition, must be done on the database.
  4. Make a change to legacy systems where you're forced into using the SP - Much of software engineering is working with legacy code. Sometimes this forces you into no-win situations, like fixing some giant black box stored procedure. You don't have time to rewrite it, the proc requires a 1 line change to work how the client wants it, and making that change in the stored proc is the least of the evils.
  5. The application has no business tier - Perhaps this procedure isn't for an N-tier app. For example, maybe it's for a custom report, and the reporting framework can only call stored procs directly, without any middle-tier manipulation.
  6. Performance critical code - Perhaps "special" code must be optimized for performance, as opposed to maintainability or development schedule. For example, you may have some rules engine that must perform, and being closer to the core data allows that. Of course, sometimes there may be ways to avoid this, such as caching the results, scaling out the database, refactoring the rules engine, or splitting it into CRUD methods that could be batched with an ORM mapping layer.
  7. Easy transactions - It can be far easier for a simple architecture to rollback a transaction in SQL than in managed code. This may press developers into dumping more logic into their procs.
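
As a minimal sketch of the first point, here's what the aggregation-at-the-database approach might look like from C# with plain ADO.NET (the OrderItems table, its columns, and the connection setup are hypothetical - the idea is just that SUM() and the WHERE clause run on SQL Server, so only a single value crosses the wire):

using System;
using System.Data.SqlClient;

public static class OrderTotals
{
    // Returns the order total by letting the database do the aggregation,
    // instead of pulling 1000 child rows to the app server and summing in C#.
    public static decimal GetOrderTotal(SqlConnection connection, int orderId)
    {
        using (SqlCommand command = new SqlCommand(
            "SELECT SUM(Price * Quantity) FROM OrderItems WHERE OrderId = @OrderId",
            connection))
        {
            command.Parameters.AddWithValue("@OrderId", orderId);
            object result = command.ExecuteScalar();
            return (result == null || result == DBNull.Value) ? 0m : (decimal)result;
        }
    }
}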

Note that for any of these reasons, at least consider still testing your database logic and refactoring your SQL scripts.

These "reasons" are probably a bad idea:

  1. Easy deployment - You can update a SQL script in production real quick! Just hope that the update doesn't itself have an error which accidentally screws up all your production data. Also consider, why does the procedure need to be updated out-of-cycle in the first place? Was it something that would ideally have been abstracted out to a proper config file (whose whole point is to provide easy changes post-deployment)? Was it an error in the original logic, which could have been more easily prevented if it had been coded in a test-friendly language like C#? Also, keep in mind that you can re-deploy a .Net assembly if you match its credentials (strong name, version, etc...), which is very doable given an automated build process.
  2. It's so much quicker to develop this way! Initially it may be faster to type the keystrokes, but the lack of testability and reusability will make the schedule get clobbered during maintenance.

In my personal experience, I see logic end up in the SP mostly for the bad reasons ("it's real quick!") - and the majority of the time it can be refactored out, to the benefit of the application and its development schedule.

Monday, June 1, 2009

Chicago Code Camp - reflections

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/chicago_code_camp__reflections.htm]

This weekend the LCNUG and Alt.Net groups hosted a very successful Chicago Code Camp at the College of Lake County in IL. We had about 150 people, 30 speakers, and a lot of knowledge transfer. I was privileged to be part of the group that helped facilitate it. We're in the process of trying to put speaker ppts and code samples up on the website.

Some misc reflections:

How did we make that massively awesome code camp website? We maintained it in ASP.Net, used xml files for the abstracts and speaker bios, and then had an automatic tool take screen scrapes of all the generated aspx pages. This saved the results as html files, which could then easily be xcopy-deployed to any FTP server without worrying about a backend data store or server-side support.
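
For those curious, here's a rough sketch of that idea - not the actual tool we used, and the page names and local URL are made up: request each rendered .aspx page from a locally running copy of the site and save the output as static .html files, ready to xcopy anywhere.

using System.IO;
using System.Net;

class StaticSnapshot
{
    static void Main()
    {
        // Pages to snapshot - hypothetical names for illustration.
        string[] pages = { "Default", "Sessions", "Speakers", "Directions" };

        using (WebClient client = new WebClient())
        {
            foreach (string page in pages)
            {
                // Let ASP.Net render the page, then keep only the resulting html.
                string html = client.DownloadString("http://localhost/CodeCamp/" + page + ".aspx");
                File.WriteAllText(page + ".html", html);
            }
        }
    }
}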

Did you have enough volunteers? I think in the end yes. People stepped up at the spur of the moment. I was especially impressed with how easy it was to move tables when you have 150 people. For lunch and cleanup, we needed to transport a lot of tables and boxes, and random people kept jumping in to help. What a great community!

Sounds great, but too bad I missed it. While we hope to get the ppt slides up soon, also consider checking out the blogs of the speakers. Even if the blog isn't explicitly listed, you can probably find it by googling the speaker name and adding the ".net" or "developer" keyword to the search.

What's next? Stay in touch with the Chicagoland community. Perhaps subscribe to some of the speaker blogs, or visit the LCNUG, ALT.Net, or any of the other user groups.

See also: 10 Reasons to attend the Chicago Code Camp, Sessions up.

Sunday, May 31, 2009

What is the difference between an average dev and a star?

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/what_is_the_difference_between_an_average_dev_and_a_star.htm]

Time and again, experience keeps re-affirming to me that a star developer is far more productive than several average developers combined. It begs the question - what is the difference between a novice, average, and star developer? The naive thinking is that they're all the same kind and just vary by degree - i.e. that the star produces more code at a faster rate, with fewer bugs. But there's so much more to it than that. The star developer is a fundamentally different kind. Here's a brain dump of some possible differences between star developers and average devs (this also relates to the Dreyfus model of skill acquisition: http://en.wikipedia.org/wiki/Dreyfus_model_of_skill_acquisition).

A star developer:

  1. Mitigates risk
  2. Writes higher quality code
  3. Predicts potential problems, and doesn't design the app into a corner
  4. Addresses performance, security, maintenance
  5. Makes their teammates better
  6. Comes up to speed quicker
  7. Juggles multiple projects
  8. Troubleshoots and figures out answers themselves
  9. Does self correction
  10. Handles non-technical constraints (like a political monkey wrench)
  11. Can provide reliable estimates
  12. Interfaces with non-technical folks (Managers, Business Sponsors); understands the business side
  13. Throws away bad code; knows when to restart
  14. Creates reusable code that helps others
  15. Has influence beyond their immediate project
  16. Desires to grow
  17. Can work for longer periods without immediate and visible success
  18. Can compress the schedule, such as doubling productivity.
  19. Can coordinate other developers, even informally and not as a tech lead.
  20. Can set up and improve process
  21. Understands the concepts beyond syntax. Average devs crash when the technical road takes a curve with a new language; stars know the concepts and don't get thrown for a loop.
  22. Knows when the rules do not apply
  23. Knows where they want to go; can describe their career path.
  24. Is continually learning and improving.
  25. Can point to star developers and industry leads. If you're developing ASP.Net, and you've never even heard of Scott Guthrie, you're probably not a star.

Note that a star does not necessarily:

As a completely miscellaneous aside, I think of a real-time-strategy game like Age of Empires - what is the difference between an average unit and a star unit? Look at an average unit like a simple soldier - it has basic properties like health points, attack, and movement. While merely upgrading the strength of these properties (more health, more attack, etc...) is great, it is still fundamentally missing other powerful aspects - the ability to heal itself, a ranged attack, the ability to swim (or fly!), the ability to collect resources or create buildings, etc... For example, a thousand normal AoE soldiers - which have zero ranged attack, and cannot swim - will never be able to hit an enemy across even a 1-block wide river.

Thursday, May 28, 2009

BOOK: Software Estimation: Demystifying the Black Art

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/book_software_estimation_demystifying_the_black_ar.htm]

I personally have never met a developer who liked giving estimates. Indeed, it's an uphill battle. It's not just that developers don't like working on things which don't compile. It's also that amidst endless ambiguity, changing scope, and technical challenges, some boss pressures you into a specific date when a non-specific feature will be fully complete (with no bugs, of course). However, every aspiring tech lead must eventually fight this battle. Because software estimation is perceived as a "soft science", it's easy for devs to view it as some necessary tax to tolerate in order to get to the "real" work of coding. Therefore, many devs never try to improve their estimating skills, and hence there are a lot of bad techniques out there.

That's why I'm glad that superstar Steve McConnell dumped his wisdom into Software Estimation: Demystifying the Black Art. The book is filled with gems. Here are some of the key tips that I found practical:

  • "Distinguish between estimates, targets, and commitments".
  • Once you've given an estimate and schedule, control the project so that you meet the estimate (reduce scope, juggle staff, prioritize features, etc...). The schedule is not like a basketball that you toss in the air and hope that it makes the shot - rather it's like a football that you personally carry to the end zone.
  • "We are conditioned to believe that estimates expressed as narrow ranges are more accurate than estimates expressed as wider ranges." (pg. 18)
  • Steve emphasizes the "Cone of Uncertainty" - namely that initially, the project has many unknowns, and hence the estimate has a much larger range. However, as the project progresses and more issues become known, the range shrinks. Ironically, it is at the beginning of the project (where everything is most unknown) that many bosses want a stable, singular estimate. Consider using phase-specific estimation techniques for different stages of the project - some techniques work better at the beginning with higher uncertainty, others work better near the end. Also, within the same project phase, consider using multiple estimation techniques to check for converging (and hence more reliable) estimates.
  • Always have the people who do the implementation work also provide the estimates.
  • Sometimes, you can use group estimates. However, each developer should come up with their estimates separately, else the dominant person in the group will likely overly influence everyone else (especially because most devs tend to be introverts). If their estimates differ greatly, then discuss why they're different, and iteratively keep re-estimating until they converge.
  • Collect historical data for past projects, which you can then use to assist with future estimates.
  • Count, Compute, Judge. If you can simply count the size somehow (lines of code, modules, etc...), then that's best. If you cannot directly count, consider computing from things you can count. The point is you always want to avoid subjective judgments when there's a more objective alternative (see the toy example after this list).
  • Try to estimate based off of objective quantities like historical data and lines of code, as opposed to subjective measurements like developer quality. The boss will heckle the subjective measurements: "Your estimate assumes only average developers, but we're paying for senior devs, so re-calculate with senior devs. Great, that saved us 1 month."
  • When the boss doesn't like the estimate, change inputs, NOT outputs. For example, don't just shave a month off your 5-month estimate because your boss will now be more receptive, rather change the inputs (such as reduce features) so that recalculating the estimate now results in 4 months.
  • Establish a standard estimation procedure for your department. By following an objective set of steps, you (the estimator) have a level of "protection" from all the business sponsors who want a lower estimate. For example, you might say "Our standard procedure, which historically returns estimates within a 20% accuracy, always adds 15% risk buffer for this type of application." Then you aren't the "bad guy" for adding the "extra" 15% (funny how anything that increases an estimate is seen as "extra", not "striving for accuracy").
  • When presenting your estimates (and schedules), don't even bother presenting nearly impossible scenarios. The boss will just assume that you'll be lucky and of course hit the most optimistic number that is mentioned.
  • Always give an honest estimate. Don't lowball the estimate just so that your boss accepts it - doing so will screw you with an impossible schedule, which will also destroy your credibility ("you can't even hit your own estimate!"). If an honest estimate is too high, it is the business sponsor's decision to reject the project, not the developer's.
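
As a toy illustration of "count, then compute" (this is my example, not the book's, and all the numbers are made up): count something objective like the number of screens in the spec, then compute the estimate from historical hours-per-screen, rather than judging a number off the top of your head.

class CountThenCompute
{
    static void Main()
    {
        int screenCount = 24;             // counted from the spec
        double hoursPerScreen = 18.5;     // from historical project data
        double riskBuffer = 0.15;         // standard 15% buffer from our estimation procedure

        double baseHours = screenCount * hoursPerScreen;
        double estimateHours = baseHours * (1 + riskBuffer);

        System.Console.WriteLine("Estimate: {0:N0} hours", estimateHours);
    }
}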

Lastly, consider checking out Construx Estimate.

Sunday, May 17, 2009

Tips to refactor Asp.Net codebehinds to a common base page.

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/tips_to_refactor_aspnet_codebehinds_to_a_common_base_page.htm]

In Asp.Net, it is common to have the aspx page inherit from a custom page as opposed to directly from System.Web.UI.Page. Usually the application architecture has at least one common "base page" that handles core architectural components. However, you may find that for larger apps, you want to extend your base-page hierarchy such that an entire subsystem gets its own base page (which in turn inherits from the global base page). While inheritance with normal C# classes (like in a ClassLibrary project) works great, there are some common hurdles when trying to refactor to a base page in ASP.Net codebehinds.
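
For example, the kind of hierarchy I mean might look like this (all names are hypothetical):

// Every page inherits the global base page.
public class GlobalBasePage : System.Web.UI.Page
{
    // cross-cutting concerns: logging, security checks, session handling, etc.
}

// Pages in a given subsystem inherit a subsystem-specific base page,
// which in turn inherits the global one.
public class BillingBasePage : GlobalBasePage
{
    // logic shared only by the billing subsystem's pages
}

public partial class BillingInvoicePage : BillingBasePage
{
    // an individual page's codebehind
}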

Perhaps the biggest issue is the dependencies, namely:

  1. Your base page cannot directly reference any user controls. Assuming you put the base page in the App_Code folder, this gets compiled to its own separate DLL, and that DLL has no reference to the UC dll (however, the UC has a reference to App_Code).
  2. Your base page cannot directly reference the html controls instantiated in the derived-page's aspx file. For example, if your derived aspx page contains a hidden field or textbox, your base page cannot reference that.

There are ways to break these dependencies.

For problem #1 with the User Control references, you can make an interface, have that user control implement the interface, and then let the base page have a member variable of the interface type. The derived page could populate that variable. In other words:

  • Say you have UserControl "Address".
  • Your base page needs to call the "LookupZipCode" method
  • Create an interface IAddress with member "LookupZipCode". Have the UC implement this interface.
  • Have your base page contain an instance field of type IAddress. Your base page can then call this.
  • Have the derived page pass the UC reference to the base page (perhaps in the OnInit method), for example set base.IAddress = MyUserControl. (See the sketch after this list.)
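
Here's a minimal sketch of that approach, using the hypothetical Address control and LookupZipCode method from the steps above (the class and member names are made up):

using System;

// The interface lives where both the base page and the user control can see it (e.g. App_Code).
public interface IAddress
{
    string LookupZipCode();
}

// The user control implements the interface.
public partial class Address : System.Web.UI.UserControl, IAddress
{
    public string LookupZipCode()
    {
        // real lookup logic would live here
        return "60047";
    }
}

// The base page (in App_Code) only knows about the interface, never the control type.
public class MyBasePage : System.Web.UI.Page
{
    protected IAddress AddressControl;

    protected string GetZipCode()
    {
        return AddressControl == null ? null : AddressControl.LookupZipCode();
    }
}

// The derived page hands the user control reference to the base page.
public partial class CheckoutPage : MyBasePage
{
    protected Address MyAddressControl; // normally declared in the page's designer file

    protected override void OnInit(EventArgs e)
    {
        base.OnInit(e);
        this.AddressControl = this.MyAddressControl;
    }
}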

Another approach, say if you're casting, is to pass in a generic. For example, create this kind of method in your base page (note that "T" would be a user control type, which your base page cannot reference directly):

public T HeaderControl<T>() where T : class
{
    return this.Master.FindControl("Header") as T;
}
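
A derived page (which can reference user controls) can then grab a typed reference - for example, assuming a hypothetical SiteHeader user control sitting on the master page with ID "Header":

// called from a derived page's codebehind
SiteHeader header = this.HeaderControl<SiteHeader>();
if (header != null)
{
    header.Visible = false;
}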

For problem #2, the html control references, there are several solutions (see the sketch after this list). Keep in mind that unlike user controls, HtmlControls (and WebControls) are predefined in an external assembly - System.Web - and can hence be referenced in the base page. So the question becomes how to pass the reference from the derived class to the base class.

  • Pass in references via method signature - for example the method that sets a hidden control expects an HtmlInputHidden parameter, as opposed to referencing "this.MyHiddenControl".
  • Make the base class have its own separate protected instance, and have the derived class set this (such as in the derived class's OnInit). For example, the derived class OnInit method may contain a line like "base.BaseHiddenField = this.MyHiddenField". This is useful if that control is referenced in many base-class methods, and you don't want to update all the signatures.
  • FindControl - You can always use the "Page.FindControl(string)" method to get a control given the string ID, but this is brittle and slower. Because you pass in a string, you don't get compile-time type checking. It also must search the entire control collection instead of just having a direct reference to the control.
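
A minimal sketch of the first two options, reusing the hypothetical MyBasePage from earlier (the method and field names are made up):

using System.Web.UI.HtmlControls;

public class MyBasePage : System.Web.UI.Page
{
    // Option 2: a protected member that the derived class assigns, e.g. in its OnInit.
    protected HtmlInputHidden BaseHiddenField;

    // Option 1: the control is passed in explicitly via the method signature,
    // so the base page never needs to know which page-level field it came from.
    protected void SetTrackingValue(HtmlInputHidden hidden, string value)
    {
        hidden.Value = value;
    }

    protected void ClearTrackingValue()
    {
        if (BaseHiddenField != null)
        {
            BaseHiddenField.Value = string.Empty;
        }
    }
}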

While there are a lot more ways to refactor an ASP.Net codebehind, these two tricks are useful.

Thursday, May 14, 2009

10 Reasons to attend the Chicago Code Camp

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/10_reasons_to_attend_the_chicago_code_camp.htm]

The LCNUG and Alt.Net groups are hosting a Chicago Code Camp on Saturday, May 30th, at College of Lake County (CLC) in the Northwest suburbs. If you're a passionate developer looking to grow, this is exactly the kind of event you want to consider.

  1. 33 cutting-edge sessions across a wide spectrum of topics and developer communities.
  2. Presentations by industry leaders, MVPs, experienced developers, and tech leads.
  3. Network with perhaps 200 local developers, which always results in great discussions with your peers.
  4. It's free!
  5. Tons of swag.
  6. Get to ask face-to-face questions with real experts.
  7. A full day of sessions - so don't skip out just because you can only make the morning or afternoon.
  8. See what new technologies could add value to your development projects.
  9. Lots of hands-on code, not just fluffy power points about ivory tower theories.
  10. You will be a better developer for having gone.

We're also looking for volunteers. Consider helping out - it's a great way to get involved.

Wednesday, May 13, 2009

Troubleshooting Visual Studio Profiler with ASP.Net

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/troubleshooting_visual_studio_profiler_with_aspnet.htm]

Visual Studio 2008 (and I think 2005) comes with a built-in .Net profiler. Profilers are especially useful for troubleshooting performance bottlenecks (related: the CLR Profiler for older versions of Visual Studio; SQL Profiler). There are already good tutorials on it, so here I'm just offering misc tips and troubleshooting. Basically, you can go to Analyze > "Launch Performance Wizard", and just select all the defaults. You can use this with ASP.Net web projects too.

Error: When trying to start, I got "Value does not fall within the expected range".

Solution: You can set the build configuration of the website to "Any CPU".

Error: VSP1351 : The monitor was unable to create one of its global objects

Other times I got this error:

Error VSP1351 : The monitor was unable to create one of its global objects (Local.vsperf.Admin). Verify that another monitor is not running, or that another application hasn't created an object of the same name.
Profiler exited
PRF0010: Launch Aborted - Unable to start vsperfmon.exe

Solution: You can use Process Explorer to see what is locking "vsperf.Admin", and then kill that process tree. For me, it was the aspnet_wp.exe process.

Error: PRF0010: Launch Aborted - Unable to start vsperfmon.exe

Sometimes I got this error:

PRF0010: Launch Aborted - Unable to start vsperfmon.exe
Error VSP1342 : Another instance of the Logger Engine is currently running.
Profiler exited

Solution: I restarted Visual Studio (yes, it's lame, but it appeared to work). I also ensured that all other instances of VS were closed.

Error: Sometimes clicking the stop button caused IE and VS to crash, without even collecting data

Solution: Just closing the IE browser directly (instead of using the stop button) worked.