Business has been booming for my little custom software company. We're not ready to take over the world or release a product just yet, but things are doing OK. I'm happy. And I've been pondering the means by which I've managed to grow the business. Sometimes it all seems like a happy accident. But after a talk with a good friend and colleague, I think I understand what has happened.
How does any business get customers? By advertising? Nobody pays attention to that anymore. By clever PR? Sure, there's always the odd person who will hire you because he saw that newspaper article about how you donated 100 hours of your time to build a website that takes donations to help cure those poor kids in the oncology ward of the local hospital. But as I talk to other solo and small businesspeople, an interesting trend reveals itself: the #1 way small firms get business is by utilizing Face Time and Free Stuff.[ChristopherHawkins.com Blog]
Tuesday, January 31, 2006
Thursday, January 26, 2006
There’s an interesting web page called “Firefox Myths” that takes some of the more common claims associated with Firefox (it’s faster, it’s more secure, it’s a floor wax and a dessert topping) and examines each one. While I’m sure that busting all of those myths is going to bug a lot of people who prefer Firefox, I think the author is dead on.
While I like the idea of Firefox, its implementation bugs me. It’s slower than IE and much slower than Opera. IE gets a startup boost by being part of the OS; Opera is just fast. I also prefer the Opera UI; it handles MDI tabs much better than Firefox does. You can get the same functionality in Firefox with extensions, but I prefer a nice shiny out-of-the-box experience.
I do have Firefox running on the iLamp. It’s arguably less annoying than Safari.
I fully admit to being an Opera bigot. It’s faster and more secure than the other two, and I prefer the UI. The last reason is purely subjective, but the other two are real. There are plenty of sites that don’t fully support Opera, but that’s mainly a sin of omission. Opera supports the same web standards as the others, and it lets you change the user agent string on the fly to get past some of the sites that don’t allow Opera.
And the people behind Opera have a good sense of humor. Three years ago, Microsoft’s MSN portal changed its code and deliberately excluded Opera users. When Opera Software found out about it, they complained to MS and then released the “Bork” edition of Opera. It worked just like the regular edition, except on the MSN site, where it would translate all of the text into the language of the famous Swedish Chef from the Muppet Show: Bork, Bork, Bork!
I have disabled comments on this post because the signal-to-noise ratio was starting to drop. There were some good comments, but I didn't want this post to turn into a "Bash Firefox" post. That was never my intent.
But is a .NET web service thread safe when it's not running inside IIS? If 10 clients call the same method at the same time, are threads spawned to handle each request, are they just queued up, or does it just collapse like a house of cards? Googling for clues turned up very little (for once). I did come across a posting from Mark Fussell, the WSE Program Manager at Microsoft, where he states that it's not thread safe but I may be reading that in the wrong context. On a side note, do the web service endpoints in SQL Server 2005 have the same limitation?
When I get some time, I'll build a simple WSE 3 based web service, blast it from multiple clients, and see what happens. Between Ethereal and log4net, I should get some metrics out of it.
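The eventual test will be a WSE 3 service, but the shape of the experiment is easy to sketch in any language. Here's a rough Python version of the idea: stand up a deliberately slow toy HTTP server, hit one endpoint from many threads at once, and see from the total wall time whether requests are served concurrently or queued up. The 0.2-second delay and client count are arbitrary choices for illustration.

```python
import threading
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class SlowHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        time.sleep(0.2)  # simulate a slow web method
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):  # keep the console quiet
        pass

def blast(url, clients=10):
    """Issue `clients` simultaneous GETs and return total wall time."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=clients) as pool:
        list(pool.map(lambda _: urllib.request.urlopen(url).read(),
                      range(clients)))
    return time.perf_counter() - start

if __name__ == "__main__":
    server = ThreadingHTTPServer(("127.0.0.1", 0), SlowHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    url = "http://127.0.0.1:%d/" % server.server_address[1]
    elapsed = blast(url, clients=10)
    # If requests run in parallel, 10 x 0.2s calls finish in well under
    # 2 seconds; if they are serialized, the total approaches 2 seconds.
    print("10 concurrent calls took %.2f seconds" % elapsed)
    server.shutdown()
```

The same timing comparison against a real WSE endpoint, plus a packet capture, should make the threading behavior obvious either way.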
To help protect your security, Internet Explorer has restricted this file from showing active content that could access your computer. Click here for options...
This appears in the Information Bar when the active content is blocked from running in the Local Computer zone. This was added to XP with SP2 as part of Microsoft's security initiative. While this is generally a good thing, I do find it annoying. I spent a lot of time viewing XML recently while working on a C# service to collect GPS data from several vendors. I was tracking down a discrepancy with a live feed and this warning became a distraction. After solving the actual problem with the GPS feed, I moved on to the next item and forgot about the warning message.
Tonight I attended a great WSE 3.0 session at TVUG. The presenter was Julie Lerman; she did a great job and I learned a few things, but that's another blog entry to come. While demonstrating how the XML logging works, she would display the XML output files through IE. She kept getting the warning message in the Information Bar and asked if anyone knew how to get rid of it. After a little googling, I found out how to disable that message.
Select "Internet Options" from the IE "Tools" menu, select the "Advanced" tab, scroll down to Security, check the box "Allow active content to run in files on My Computer", and click the "Apply" button. Bob's your uncle: you will no longer see that particular warning message. For the curious, MS has a detailed list of the Information Bar messages in KB article 843017.
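If you'd rather not loosen that setting for every local file, another option (for HTML files, at least) is to stamp the individual file with the "Mark of the Web" comment, which tells IE to treat the local file as if it came from the Internet zone instead of the locked-down Local Machine zone. It's just a comment on the first line, where the number in parentheses is the character count of the URL that follows:

```html
<!-- saved from url=(0014)about:internet -->
```

I'm not sure this trick helps for raw .xml files viewed in IE, so for Julie's XML logging scenario the Advanced-tab setting is probably still the way to go.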
Tuesday, January 24, 2006
Jeff Atwood has a good rant about people asking for web sites that scale down to devices like PDAs or crackberries. He takes the view that it would be nice if every web site could scale from cell phone size to desktop, but that it’s not always a realistic request.
I’m with Jeff on this. While it’s fairly straightforward to parse a browser’s user agent string and redirect to the appropriate destination, you have to make a risk/reward assessment of the cost of implementing your site that way. Just because it’s possible to do something doesn’t mean it’s practical. A site like Google is an obvious choice to run at any size: you can use a search engine without any graphics and with a minimal amount of actual typing.
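The "parse the user agent and redirect" part really is the easy bit, which is worth seeing to appreciate that it's the testing, not the detection, that costs you. A minimal sketch (in Python for illustration; the marker substrings and page paths are made up, not an exhaustive list of handheld browsers):

```python
# Illustrative markers only -- a real list would need constant upkeep,
# which is part of the QA cost argument above.
MOBILE_MARKERS = ("BlackBerry", "Windows CE", "Symbian", "PalmOS", "Opera Mini")

def is_microbrowser(user_agent):
    """Crude check: does the User-Agent string look like a handheld?"""
    ua = user_agent.lower()
    return any(marker.lower() in ua for marker in MOBILE_MARKERS)

def pick_landing_page(user_agent):
    """Choose a redirect target based on the browser type."""
    return "/mobile/index.html" if is_microbrowser(user_agent) else "/index.html"

print(pick_landing_page("BlackBerry8700/4.1 Profile/MIDP-2.0"))   # → /mobile/index.html
print(pick_landing_page("Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)"))  # → /index.html
```

Ten lines of detection code, and then you still have to build, test, and maintain the second presentation layer it redirects to.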
That choice doesn’t always make sense. For example, our company’s new product is a web-based school bus tracking application. Microbrowser compatibility was not even on the radar. We know who our initial client base will be and what they will be using, and viewing live feeds of a fleet's current position really doesn’t play well at 176x220. We also know that this application will be run from transportation and administration offices, not out in the field. The time it would take to develop an alternate presentation layer would take away from the development time needed for other parts of the application.
I’m not buying the argument that you can handle the grunt work with CSS; it’s a non-trivial task to support handheld browsers. That market is very fragmented right now. Add handhelds to the testing mix, and your QA costs suddenly get a lot higher: usage plan costs to eat, various models of handhelds that have to be available for testing, and so on.
Millions of Web users are out there with cell phones. If you don’t get your site to work properly with a cell phone, you’re turning away customers and that sucks. It’ll get called out.
Comments like that are why I stopped reading Scoble too.
If we had actual demand from our customers for microbrowser support, of course we would consider implementing that level of functionality. It boils down to this: we are not going to lose customers right now if we only support desktop browsers.
Monday, January 23, 2006
With .NET, I keep hearing “Don’t use exceptions, they’re expensive,” and I have always wondered how true that is. I’ve been in the camp of using exceptions when you need them and not worrying about the performance cost. You want to use them for handling situations where something completely unexpected is happening in the code. I don’t believe in using them to return an error condition; that’s what function return values are for.
But how expensive are they? It’s all relative, but it turns out they’re not that painful performance-wise. Jon Skeet has a simple bit of code that benchmarks how expensive it is to throw exceptions, and you can read about it in this article. The code is pretty simple and the results are only relative to his machine, but he was handling in the range of 40 to 188 exceptions per millisecond. Seems quick enough for me, but your mileage may vary. He noted that running the code inside the debugger is much slower (many seconds versus a few milliseconds) for the processing of exceptions. That performance hit from running under the debugger probably led people to think that exceptions are expensive.
There are some caveats with Jon’s example. He throws the same exception repeatedly; it may be more expensive to throw many different exceptions than the same one over and over. His example was in the Main() function and wasn’t nested at all; the deeper you nest the code, the more expensive the hit is.

Technorati Tags: C# .NET exceptions
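Jon's benchmark is C#, but the shape of the measurement ports to any language. Here's a sketch of the same idea in Python: time a loop that throws and catches on every other iteration against one that signals the same condition with a return value. The absolute numbers depend entirely on the machine and runtime; only the ratio is interesting.

```python
import time

ITERATIONS = 100_000

def via_exception(n):
    if n % 2:
        raise ValueError("odd")  # error path signaled by a throw
    return n

def via_return_code(n):
    if n % 2:
        return None  # same condition signaled by a sentinel value
    return n

def time_it(fn):
    """Run fn over the iteration range, swallowing errors, and time it."""
    start = time.perf_counter()
    for i in range(ITERATIONS):
        try:
            fn(i)
        except ValueError:
            pass
    return time.perf_counter() - start

if __name__ == "__main__":
    t_exc = time_it(via_exception)
    t_ret = time_it(via_return_code)
    print("exceptions: %.3fs, return codes: %.3fs" % (t_exc, t_ret))
```

The exception path will come out slower, but as with Jon's numbers, the per-exception cost is small enough that it only matters when exceptions are part of your normal control flow rather than genuinely exceptional.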
Tuesday, January 17, 2006
I'm getting ready to help repave a family member's PC (Windows XP). Too many questionable things have been installed and some nasty thing is blocking https pages and pages that lead to diagnostic tools. When something is actively blocking Sysinternals, then you know you have something malevolent on board. The usual suspects haven't been able to clean it, so it's time to sanction the spyware/hijackers/cruft with extreme prejudice.
As part of the OS installation, I'm going to push to have the user accounts created with limited user access (LUA). That should help keep the nasties from getting a toehold again. Aaron Margosis's blog has a good article on why you should do this, along with a tool named MakeMeAdmin that lets users run as LUA but still launch a command shell with admin rights when needed. For more information on why you should run with LUA, check out the rest of Aaron's posts, starting at the top.
The Microsoft Solutions for Security and Compliance group (MSSC) has just released a white paper about the principles behind LUA; it's a good starting place. You can download it from "Applying the Principle of Least Privilege to User Accounts on Windows XP" or view it online right here.
Scott Hanselman has a checklist for the après repaving before the machine is really usable. It's geared for a dev type of box, but the concept can be applied to civilian uses. For dev boxes, Scott also has the Ultimate Developer and Power User Tools List, which is pretty cool.
Another good checklist is at AngryPets.com; I like its idea of keeping base images to restore from, which would save a lot of time. CNet Australia has a checklist of the top ten things to do before connecting to the Internet. Oddly enough, I couldn't find that article on their US site.
If the repaving goes relatively painlessly, I may do the same for my home PC. It's slowly filling up with cruft too. On the other hand, it's running just fine, so I probably won't repave until something seriously breaks.
[edited on 1/19/06 and 2/7/06]
1997 – Taking the advice of hallucinating industry analysts, Corel decides to rewrite all their applications, including WordPerfect, in Java. The end result is the first known word processor that is slower to use than a typewriter.

This is a companion piece to his salute to Basic.
Thursday, January 12, 2006
We're on a bit of an Outlook bender this morning. For those of you who live in Outlook, Microsoft offers a complete list of Outlook keyboard shortcuts. A few good ones:
- Enter a name in the Find a Contact box F11
- Switch to Mail CTRL+1
- Switch to Calendar CTRL+2
- Switch to Contacts CTRL+3
- Switch to Tasks CTRL+4
- Switch to Notes CTRL+5
- Switch between the Folder List and the main Outlook window F6 or CTRL+SHIFT+TAB
- Create a new Message CTRL+SHIFT+M
- Switch to Inbox CTRL+SHIFT+I
- Switch to Outbox CTRL+SHIFT+O
Monday, January 09, 2006
Wednesday, January 04, 2006
It's been around in various incarnations for a couple of years now, and there is a fairly active community of 3rd-party firmwares for it and its older brother, the WRT54G. Until quite recently, it was built on a Linux platform; Linksys was forced to release the source code to stay in compliance with the GPL. Lots of talented programmers took the ball and ran with it, adding many new features. Even Earthlink released a customized firmware that adds IPv6 support to the router.
The latest versions are no longer Linux based; they run a customized VxWorks firmware that provides functionality similar to the stock WRT line but requires less RAM. This version is not currently compatible with the 3rd-party firmwares. There are still Linux-based units out in the retail market, and you can tell from the outside of the box: each hardware version has a slightly different serial number, which you can identify from the list on this page.
These custom firmwares come in many flavors. Some are based off of the current Linksys firmwares and add some new functionality. Others have veered quite a bit from the path and do not look or behave like the original firmware. I'm using the DD-WRT firmware and it works great. I'm only using a fraction of the added functionality, but the stuff it adds is pretty cool.
I use the site survey function on the router to see what other routers and/or access points are active near me. I'm in a small residential neighborhood and I can usually see 7 to 10 wireless routers. That's where the fun begins.
In the US, there are 11 channels reserved for 802.11b and 802.11g. The problem is that the channels overlap each other; only channels 1, 6, and 11 are distinct. For the best reception on your network, you want to be on one of those channels, with the fewest neighbors on the same channel. The site survey feature in the DD-WRT firmware will display each router it finds, along with its channel and signal strength.
Most of the routers will be on channel 6; that seems to be the default channel out of the box. So I'll use channel 1 or 11, depending on which one has the least traffic. And of course, there's usually someone using one of the overlapping channels like 9 or 2.
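The reason 1, 6, and 11 are the distinct channels falls out of the arithmetic: 2.4 GHz channel centers sit 5 MHz apart, but each 802.11b/g channel is roughly 22 MHz wide, so two channels interfere unless their numbers differ by 5 or more. A few lines to show it (the 22 MHz width is the nominal 802.11b figure):

```python
def center_mhz(channel):
    """2.4 GHz band: channel 1 is centered at 2412 MHz, 5 MHz spacing."""
    return 2407 + 5 * channel

def overlaps(ch_a, ch_b, width_mhz=22):
    """Two channels interfere if their centers are closer than one channel width."""
    return abs(center_mhz(ch_a) - center_mhz(ch_b)) < width_mhz

# Which of the 11 US channels avoid interfering with the other two
# members of the usual 1/6/11 trio?
clear = [ch for ch in range(1, 12)
         if not any(overlaps(ch, other) for other in (1, 6, 11) if other != ch)]
print(clear)  # → [1, 6, 11]
```

So when a neighbor parks on channel 9 or 2, they're bleeding into two of the three clean channels at once, which is why those choices are so antisocial.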
Another feature added by DD-WRT (and most of the 3rd-party firmwares) is control over the power to the transmitter. I saw a small increase in signal strength on the client PCs by raising the power setting a bit. I got a bigger boost by using Linksys's HGA7T High Gain Antennas: you basically replace the little stubby antennas that come with the WRT with ones about twice the size. You can find them on eBay or at Walmart for around $40. It's worth the money if you have a weak signal between the router and the PC.
There are a few ways to do this; the way I like is a stored procedure that generates the data and sends back only what I need. This eliminates most of the storage requirements on the web side of things. Using a sample table, I'll write a procedure that lets me grab the data by row numbers.
create table Employee (
    RecordID integer identity(1,1) primary key,
    LastName varchar(20), FirstName varchar(20),
    IsDriver char(1))
Assume about 1,000 records, with roughly half of them having the IsDriver field set to 'Y'. Here's a procedure that returns a set of data with row numbers included.
CREATE procedure QueryByRow
    @LastName varchar(20),
    @StartRecord int,
    @EndRecord int
AS

DECLARE @MatchCount int

SELECT @MatchCount = COUNT(*)
FROM Employee
WHERE IsDriver='Y' AND LastName LIKE @LastName

DECLARE @tmp TABLE(ID int identity(1,1), RecordID int, LastName varchar(20), FirstName varchar(20))

SET NOCOUNT ON

INSERT @tmp(RecordID, LastName, FirstName)
SELECT RecordID, LastName, FirstName
FROM Employee
WHERE IsDriver='Y' AND LastName LIKE @LastName
ORDER BY LastName, FirstName

SET NOCOUNT OFF

SELECT @MatchCount AS Count, t.RecordID, t.LastName, t.FirstName
FROM @tmp t
WHERE t.ID BETWEEN @StartRecord AND @EndRecord
This procedure only allows us to filter by last name, but it can easily be extended to do other filtering or even change the sort order. You start by calling the procedure with the starting and ending row numbers for the first page. You get back the result set, with the total number of matching records repeated in the first column; store that value in a session variable or in the viewstate. Now that you know the number of records, you can get any arbitrary page by calling the procedure again with different starting and ending row numbers.
EXECUTE QueryByRow '%', 1, 25
Count RecordID LastName FirstName
----------- ----------- -------------------- --------------------
494 1 Able John
494 36 Baker John
494 69 Charles John
494 6 Dexter John
If you were showing 25 records per page, and you wanted to display page 3, you would do this:
EXECUTE QueryByRow '%', 51, 75
That would return something like this:
Count RecordID LastName FirstName
----------- ----------- -------------------- --------------------
494 2 Marro John
494 46 Martinez John
494 79 Mitchell John
494 16 Schwede John
This brings back just the 25 rows that you need. There is, of course, no free lunch. The drawback is that you are executing the same query against the data every time you call this procedure. You have to weigh that cost against the cost of retrieving the full set of data from SQL Server to the web server and having the web server filter and persist it. Depending on your data and the load on your server, SQL Server may have the result set cached in memory, in which case each additional call to the procedure will run against data already in memory.
The use of a table variable allows us to build the set in memory and generate the row numbers. I have seen other examples that used temporary tables and self-joins, but table variables seem to place the least demand on server resources. This method works with both SQL Server 2000 and 2005 and is not dependent on any version of ASP.NET.
The other drawback is that other processes could be editing the table between calls to the procedure. This is why the "Count" field is returned with each row. Should that value change between calls to the procedure then you know the data was edited in some way and you would have to make sure that your code could display a different number of records than it expected.
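If you can require SQL Server 2005, the new ROW_NUMBER() function can replace the table variable entirely. A sketch, assuming the same Employee table and the same parameters as QueryByRow above:

```sql
-- SQL Server 2005 only: the same paging via ROW_NUMBER(), no table variable.
SELECT Count, RecordID, LastName, FirstName
FROM (
    SELECT COUNT(*) OVER () AS Count,
           ROW_NUMBER() OVER (ORDER BY LastName, FirstName) AS RowNum,
           RecordID, LastName, FirstName
    FROM Employee
    WHERE IsDriver = 'Y' AND LastName LIKE @LastName
) AS numbered
WHERE RowNum BETWEEN @StartRecord AND @EndRecord
ORDER BY RowNum
```

The same caveats apply: the query still runs on every call, and the Count column can still change between calls if another process edits the table.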