One of the annoying things I find with managing our ASP.NET web applications is that I need to “shut down” our sites when deploying a new version of our assemblies. It’s not that I’m worried about visitors seeing our maintenance page for 20-30 minutes, but when search engines spider our site during that down time, they can’t find pages they knew about before. This is especially annoying when I analyze my sites in Google’s Webmaster Tools and see a bunch of HTTP errors because Google couldn’t reach pages while the site was unavailable. Since Google places a high value on site availability and quality when determining rankings, I’d like to avoid this.
Deploying the site is actually very simple. We use the XCOPY method of pushing up web forms, controls, and assemblies. But if you just start overwriting files in the live site, users get errors or inconsistent pages. And if any database changes need to be made, the code can’t function properly until the whole site is updated. Any of these problems would feed into the Google issue I mentioned above as well. Not only that, but any developer worth his or her paycheck tests the changes in the live environment anyway. So just tossing up the new version is no good.
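For reference, here’s a rough sketch of the kind of XCOPY push I’m talking about. The paths are made up, and I’m using ASP.NET’s app_offline.htm as a stand-in for however you display your maintenance page:

```
REM Rough sketch of an XCOPY-style push (hypothetical paths).

REM 1. Take the app offline. While app_offline.htm sits in the application
REM    root, ASP.NET 2.0+ serves its contents for every request.
copy /Y D:\builds\maintenance\app_offline.htm D:\sites\MySite\app_offline.htm

REM 2. Overwrite the live files with the new build (subfolders included,
REM    no overwrite prompts).
xcopy D:\builds\MySite D:\sites\MySite /E /Y /I

REM 3. Bring the site back up.
del D:\sites\MySite\app_offline.htm
```

The whole problem, of course, is everything that has to happen between steps 1 and 3.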
I’ve been considering a few different solutions to this, but I haven’t come up with something that solves the issue completely. At some point, I still have to take down the site.
One solution I’ve considered is setting up a staging server to deploy my changes to and then running my tests there. Once I’m satisfied with the results, I could push to the live site, minimizing the amount of down time. I figure the maximum down time using this approach would be 5-10 minutes, depending on whether I had to make any database changes. Not a perfect solution, but better than the 20-30 minutes I’m experiencing now.
Another solution I thought of was to set up a web farm. I could deploy the new version to one web server, make sure everything is good to go, then deploy it to the second web server. Users could still visit the site because at least one server in the farm could handle incoming requests. But this wouldn’t work well if database changes needed to be made; the site itself would still have to come down.
So right now, solution #1 appears to be the best approach and the easiest to implement. Maybe I’m making a bigger deal of this than I need to, but I think everyone wants to minimize their site’s downtime. The one thing holding me back from changing my approach is that I don’t know how much weight Google or other search engines give to a site being unreachable for a short period of time. Regardless, I’m curious what other solutions developers and web teams use to deploy their ASP.NET applications while minimizing site downtime. I’m positive there is a solution to my problem; I just haven’t thought of it yet. If anyone has something that works for them, please chime in here!
I run my site on 2 dedicated servers, 1 for the web and 1 for the DB. I keep a staging version of my site on the DB server for testing, since it generally handles a lighter load, and that way if I need an IIS restart it won’t affect my live site. As for database changes, unless you are modifying existing fields in tables, there should be ways to keep them from affecting your live site. If I need to change a view or stored procedure, say one named qryItems, I temporarily create a new one based on the original, called qryItems1, that includes the new modifications, and I change the code on the test site to use that view for testing. I almost never make changes to tables besides adding fields anyhow, so this approach has worked well for me. When it comes to actually deploying new assemblies, the only downtime the live site has is the IIS restart :)
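Here’s roughly what that looks like in T-SQL. qryItems is just the example name from above; the Items table and its columns are made up:

```sql
-- Copy of the existing view, created alongside the original so the live
-- site keeps using qryItems untouched while testing runs against the copy.
CREATE VIEW dbo.qryItems1
AS
    SELECT  ItemID,
            ItemName,
            Price,
            NewColumn        -- the new field being tested
    FROM    dbo.Items;
GO

-- Once the test site has been verified against qryItems1, roll the change
-- into the real view and drop the temporary copy.
ALTER VIEW dbo.qryItems
AS
    SELECT  ItemID,
            ItemName,
            Price,
            NewColumn
    FROM    dbo.Items;
GO

DROP VIEW dbo.qryItems1;
GO
```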
Josh,
Thanks for the suggestions. I’ll definitely try working some of those into my solution.