Demo Camp 2.0 Today
Just a reminder that Demo Camp 2.0 happens tonight at the Radiant Core Inc. offices. Be sure to check out the map to find your way. We've got over 60 people coming tonight, so it will make for very cozy presentations. See you then!
Thanks to Albert Lai and Chris Sukornyk of Bubble Labs for opening the office and hosting a TorCamp DemoCamp session.
I think demos, no matter what stage the product is at, get everyone recharged and excited to create.
Check out...
BubbleShare
Ambient Vector
Eh List?
R-Mail
Thanks to David Crow and all the wonderful people I met at TorCamp. It was a fantastic success and I look forward to meeting more people at the next event. Already Albert Lai of Bubble Labs is organizing a demo night, and there is talk of another TorCamp in the spring.
I was impressed with talks given by Reg Braithwaite, Patrick Dinnen of Wireless Toronto, Leila Boujnane of idee and my business partner Jay Goldman.
Special thanks to the sponsors and John and Geoff of Teehan + Lax for opening up their offices to us.
During my session on time management John Lax and several others in the room raised the issue of valuation of a job versus valuation of time. Jay helped to continue the discussion over on Google Groups.
It's snowing in Toronto today. Yesterday it started snowing for the first time this season.
In Toronto, Southern Ontario and perhaps most of Canada, we have adjusted our notion of when the seasons occur largely based on weather and holiday weekends. Officially, each season is supposed to last three months, with the following start days (for the northern hemisphere, give or take a few days depending on your country):
Spring, March 21
Summer, June 21
Autumn, September 21
Winter, December 21
According to Wikipedia, meteorologists go by the following:
Spring: March, April, May
Summer: June, July, August
Autumn: September, October, November
Winter: December, January, February
Which is still fine as far as each season is concerned: they each own three months and they're all quite happy with an even share.
Not so here. We define our seasons as follows:
Spring, April to start of May 2-4 weekend (Victoria day)
Summer, May 2-4 weekend to Labour Day weekend inclusively (approximately August 31)
Autumn, after Labour Day weekend (approximately September 1) to October 31
Winter, November 1 to March 31
So clearly Winter gets the lion's share with five months, Summer still hangs on with a bit extra totalling just over three months and Spring and Fall get shafted with only two months. Hell, Spring is only like a month and three weeks.
All because it snows in November, doesn't start to warm up until April and we love to mark our Summers with cottage weekends.
Thanks Fall 2005. It was nice knowing you.
So at least I'm not alone in my wait for Google Analytics data to roll in. Tim Bray has reported seeing the message "Your first reports will be ready within twelve hours" and Stephen O'Grady isn't able to get Google Analytics to recognize that he's enabled tracking on his site. It's now been almost 48 hours since I activated my account and it recognized that I had started to send data its way.
I would imagine the queue is quite long given the amount of attention the new service has received. It's interesting to note that a recent post on Digg about Google Base officially launching today caused my requests to the server to return a 500 error. I wonder if Google's resources are being stretched thin enough that they can't quite keep up with our voracious demand for these new products.
I'm curious to compare the data Analytics will spit out versus my local package. I'm not sure if you can glean search engine data from this, as spiders probably won't execute the JavaScript code to make the call.
T-Minus 12 hours and holding!
Update: Stats! Presumably the numbers aren't matching my local stats due to the RSS feeds, but they're quite disparate. I'll have to perform a calculation to see what the differences are between RSS requests and page requests.
Thank you to Charles Miller for his post on HTTP Conditional GET for RSS Hackers. My DIY RSS feed wasn't implementing conditional GET until now, essentially tossing away valuable bandwidth. If you roll your own RSS feed, I suggest you check out Charles' advice.
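The core of conditional GET is a timestamp comparison. Here's a minimal sketch in Java of the decision a feed script has to make (the class and method names are my own; real code would also parse the HTTP date format and set the Last-Modified response header):

```java
// Sketch: decide between 200 (send the feed) and 304 (send headers only),
// given the client's If-Modified-Since time and the feed's last change time,
// both as millisecond timestamps. -1 means the client sent no header.
class ConditionalGet {
    static int statusFor(long ifModifiedSince, long feedLastModified) {
        // HTTP dates have one-second resolution, so compare whole seconds
        if (ifModifiedSince >= 0
                && feedLastModified / 1000 <= ifModifiedSince / 1000) {
            return 304; // Not Modified: the client's cached copy is current
        }
        return 200; // OK: send the full feed with a fresh Last-Modified
    }
}
```

On a 304 the server sends no body at all, which is where the bandwidth savings come from for frequently polled feeds.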
Chris Justus has been experimenting with his first MiteSite. It's a chat application using AJAX tools embedded in an iframe. The implementation from the user's perspective is incredibly simple: take a small space on your site, such as a sidebar or header, and plug in a small application.
Because it's JavaScript based, it has an almost instant start-up time compared with Flash or Java applets. It offers an instant shared-space experience much like Colin Moock's home page, which previously displayed shared interaction among all the users currently on the site. I tried it out when Chris was online and it immediately made the site that much more personal.
Great work Chris!
David Crow is organizing TorCamp, the Toronto installment of BarCamp to be held the evening of November 25th and all day on the 26th.
I've recently remarked that there doesn't seem to be a great face-to-face web group in the city, even though Toronto has a large web development community.
Thanks David, should be fun!
When I first learned how to program, object-oriented tools didn't exist. Instead, for years, procedural programming was all I had. As a result my "object-oriented" code often smacks of procedural code, and I have to refactor to optimize for reuse, modularity and testing.
So it comes as no surprise that post-procedural programmers like myself will sprinkle their Java code with poor usage patterns that hark back to earlier days. One example is the absence of proper exception throws; more specifically, using null as an acceptable error condition.
When coding in, say, C, the programmer will return a particular numerical error code from a function if something was amiss during execution. The calling function was then left to deal with the error code as it saw fit. This worked, but was problematic: the codes were only visible at runtime and only had context if there was a list to match them against. Those of you who program in Microsoft ASP and see meaningless error codes such as 502345 know what I'm talking about. Sure you can look it up, but what a freakin' pain.
One of my favourite features of Java is the Exception object. Exceptions are basically object-based errors which methods are declared to throw. Utilizing exceptions is often misunderstood. Frequently programmers will simply throw the offending Exception up to the calling method. This isn't great from a coupling point of view, but the worst of these offending implementations is the null return.
The null return is prevalent even in the JDK. It is the absence of a return value from a method because something didn't go right. java.util.Hashtable's get method is an example of the null error. The get method's JavaDocs specify that the return value is "null if the key is not mapped to any value in this hashtable". In other words, if something goes wrong, we're not going to throw an exception but just return null.
Null is like the faceless exception. Something clearly didn't go right because we got null back, but unless we go and look up what that means we're not sure what happened. Changing the method to provide compile-time context makes it much easier to understand what happened and handle it accordingly.
// Using null as a return error
Object item = hashtable.get(key);
if (item == null) {
    // handle error
}
// Using an Exception as a return error
try {
    Object item = hashtable.get(key);
} catch (ObjectNotFoundException ex) {
    // handle exception
}
The amount of code makes very little difference, but now we have context for the situation. Hashtable is fairly common, so I'm sure you're used to seeing null come back, but if you're using an unfamiliar method and don't remember to check for null, your compile will proceed cleanly and eventually your program will spit up the dreaded NullPointerException.
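To see what this looks like from the library side, here's a minimal sketch of a wrapper that throws instead of returning null. Note that ObjectNotFoundException and SafeTable are my own illustrations to match the example above, not JDK classes:

```java
import java.util.Hashtable;

// Hypothetical checked exception; the name matches the earlier example
// but it is not part of the JDK.
class ObjectNotFoundException extends Exception {
    ObjectNotFoundException(String message) {
        super(message);
    }
}

// Wraps Hashtable so that a missing key fails loudly, and the compiler
// forces every caller to handle the failure case.
class SafeTable {
    private final Hashtable table = new Hashtable();

    void put(Object key, Object value) {
        table.put(key, value);
    }

    Object get(Object key) throws ObjectNotFoundException {
        Object item = table.get(key);
        if (item == null) {
            throw new ObjectNotFoundException("No value mapped to key: " + key);
        }
        return item;
    }
}
```

Because the exception is checked, forgetting to handle the missing-key case is now a compile error rather than a runtime NullPointerException.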
Peter is experimenting with OS-X self expression. I thought I'd throw in a developer's desktop with all the designers.
The next time your client insists on pop-up windows and sneaky manoeuvres to get their customers to "stay" on their website, remind them:
In the days before Google, search engines like Excite, Hotbot, and Altavista larded themselves up with content in a desperate effort to delay users beyond the two pages of a search activity - search box and results. The goal was "stickiness," discouraging people from leaving your domain. When Google launched, one reason it shocked the Web community was its focus on getting you to where you actually wanted to go. How could there be a successful business model in actively sending people away from your site?

Seven years and a $75 billion market capitalization later, that question has obviously been answered. The other search engines attempted to control your behavior. Google recognized that users maintain control, and to win they had to become users' preferred choice. - How I Learned To Stop Worrying and Relinquish Control by Peter Merholz
Unlike a broadcast medium, the price of popularity on the web does not scale very well. Popularity and the cost of bandwidth go hand in hand. A popular podcast by a small or independent content creator can have serious financial repercussions as word spreads and downloads increase. I'm sure anyone who has encountered the Slashdot effect will concur that being popular has its drawbacks.
Bulging bandwidth costs are what BitTorrent was originally designed to alleviate. By distributing content across the peers that are downloading it, bandwidth costs are shared by those consuming the content, and a temporary community sprouts up for each new piece of content made public using the BitTorrent format.
At the moment, BitTorrent is on the slightly more advanced side of web browsing. In many cases the user must understand that they have to download a BitTorrent client in order to receive the content, or to help the producer with their bandwidth concerns. At the technical level, a BitTorrent file is essentially treated as a separate MIME type (application/x-bittorrent), which helps the web browser associate BitTorrent files with the BitTorrent client.
I would argue that BitTorrent, while technically an application, can also be viewed as a transport mechanism on top of HTTP. The HTTP 1.1 specification, section 14.3, provides for various types of encodings when delivering content. By providing an encoding type of "bittorrent", an HTTP client could indicate that it supports BitTorrent natively and provide bandwidth savings seamlessly for the user, much like the gzip encoding currently supported in most web browsers.
Now, we can argue semantics and say that technically BitTorrent is not an encoding scheme. That may be true; however, without modifying the HTTP specification, it's a nice way to embed BitTorrent such that the transfer is virtually seamless. The server transfers the necessary torrent information and the client proceeds with the transfer as a standard BitTorrent download. But why not just have the client detect the application/x-bittorrent MIME type? Unfortunately there's no way for the server to know whether the client can handle that MIME type. The Accept-Encoding header lets the client tell the server what it can support, so the file can be served up appropriately.
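As a sketch of the negotiation, the server-side check could be as simple as scanning the Accept-Encoding header for a "bittorrent" token before deciding which representation to send. To be clear, "bittorrent" is my hypothetical token, not a registered HTTP content coding:

```java
// Sketch only: "bittorrent" is not a registered HTTP content coding; this
// shows how a server might test the Accept-Encoding header for it before
// deciding whether to serve torrent metadata or the plain file.
class TorrentNegotiation {
    static boolean clientSupportsBitTorrent(String acceptEncoding) {
        if (acceptEncoding == null) {
            return false; // no header sent: assume a plain HTTP client
        }
        for (String token : acceptEncoding.split(",")) {
            // strip any ";q=..." quality parameter before comparing
            String coding = token.split(";")[0].trim();
            if (coding.equalsIgnoreCase("bittorrent")) {
                return true;
            }
        }
        return false;
    }
}
```

Clients that don't advertise the coding would fall through to a normal HTTP download, which is what makes the scheme degrade gracefully.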
There was some support behind the idea of baked-in BitTorrent on the server side with mod_torrent, but the project has been suspended due to lack of time. This echoes my current situation in that I don't have the time at the moment to implement such a technique, but I'm interested in what's possible.
I know that the podcasting community is grappling with bandwidth issues and that BitTorrent has been mentioned a few times. I'm curious what others' thoughts are.
Location based services on cell phones have always ranked high on the list of killer applications for portable devices. In the Java world this is embodied in the JSR-179 specification, tabled by Nokia and released in September of 2003. That's great news for developers, except it's been almost two years since its release, only a few phones I can find support it (from Nokia, Motorola and Siemens), and the Nokia ones are not even on the market yet.
So far I've only found the N91 on the Nokia UK site; the Nokia 6265 and 6265i are listed as having support for JSR-179, but I haven't yet found them on any Nokia website; and the Motorola i605 (GPS) and i830 (GPS) are only available through Nextel. I also found some Siemens phones from here (CX65, C65, S6V, S65). Russell Beattie mentioned the N91 in April, pointing out among other things that it had WiFi, but didn't mention the location capabilities.
Problem number two might involve the carriers. I believe the carrier must support the passing of triangulation information if the phone isn't GPS based. It's in the carrier's best interest to enable this data but privacy advocates might not agree with me. The bottom line is of course that if you don't want to use location based applications, you don't have to. Jay Goldman also pointed out that 911 requirements may force a carrier to provide this information though that doesn't necessarily mean that the handset applications will have access to it.
Two years seems like enough time to integrate this JSR with a phone, but perhaps the development timelines didn't allow for it. Furthermore, I don't see anyone marketing this to developers with any great amount of gusto. Location based applications are going to be a huge opportunity in the mobile software market, but it's a tipping-point technology that requires a push from the carriers with a platform from the manufacturers.
In the meantime I'll start researching the capabilities on the Rogers, Bell and Telus networks in Canada. Any information is appreciated.
I always cringe when a customer runs into a problem with their website or web application before I do. Because I'm not perfect, my software isn't either, but that's difficult to explain to a customer when they're staring at a Tomcat stack trace.
JSP error pages are fairly useless to the average user in their default state. Generally the user is greeted with a cryptic number, an error message and a stack trace that is hardly useful in reaching their goal, and they have no choice but to back up and try other options, hunt for the webmaster's email, or most likely give up and leave. Software errors, much as we try to avoid them, are bound to occur. In your JSP application there are several things you can do to improve this situation.
First, set up a custom error page for catching exceptions. Add the following to your web.xml file, pointing to a JSP page to handle the error.
<error-page>
    <exception-type>java.lang.Exception</exception-type>
    <location>/error.jsp</location>
</error-page>
It's best to base the error page on the same general design as all your other site pages, with a graphic difference to indicate a problem has occurred. In Foundation we use a simple stop-sign symbol with an exclamation mark. It's important to explain to the user that an error has occurred, and it doesn't hurt to add an apology.
It's then best to indicate that the maintainer of the website has been notified of the error and will look into it. Furthermore it's helpful to provide either an email link or a form to allow the user to contact the webmaster or customer support if they require immediate assistance or if they feel they can provide more details.
Now, how do we as the webmaster know the error occurred?
A JSP error page (one declaring <%@ page isErrorPage="true" %>) has access to the offending exception. It simply exists within the JSP page as the variable "exception". We can extract the error message and stack trace from the exception to begin building a simple report detailing the problem that occurred.
StringBuffer sb = new StringBuffer();

// append error message
sb.append(exception.getMessage());
sb.append("\n");

// append stack trace
StringWriter sw = new StringWriter();
PrintWriter pw = new PrintWriter(sw);
exception.printStackTrace(pw);
sb.append(sw.toString());
sb.append("\n");
Stack traces are great but receiving an error report with a stack trace doesn't do much to tell you what happened at the time the error occurred, so let's dig for some more data.
We would like to know what page the error occurred on. We can retrieve that from the request object.
// append request URL
sb.append("Request URL:");
sb.append(request.getRequestURL());
sb.append("\n");
Knowing the page is great but page state can be helpful. We can then append all request parameters that accompanied the request.
// append parameters (note: "enum" is a reserved word as of Java 5,
// so the Enumeration variable is named "names" here)
sb.append("Parameters:");
Enumeration names = request.getParameterNames();
int numValues = 0;
while (names.hasMoreElements()) {
    numValues++;
    String name = (String) names.nextElement();
    String value = request.getParameter(name);
    sb.append(name);
    sb.append(":");
    sb.append(value);
    sb.append("\n");
}
if (numValues == 0) {
    sb.append("No parameters");
}
sb.append("\n");
Finally, it's helpful to see where the user was coming from by pulling the referrer URL from the request header. Since we're pulling request headers, we might as well add them all to the report. Headers include cookie information and what type of browser the user has.
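A sketch of that header loop follows. In the JSP you would iterate request.getHeaderNames() and look each value up with request.getHeader(name); here a plain Hashtable stands in for the request so the helper can run on its own:

```java
import java.util.Enumeration;
import java.util.Hashtable;

// Append every header name/value pair to the report, Referer included.
// The Hashtable parameter is a stand-in for the servlet request's headers.
class HeaderReport {
    static String appendHeaders(StringBuffer sb, Hashtable headers) {
        sb.append("Headers:\n");
        Enumeration names = headers.keys();
        while (names.hasMoreElements()) {
            String name = (String) names.nextElement();
            sb.append(name);
            sb.append(":");
            sb.append(headers.get(name));
            sb.append("\n");
        }
        return sb.toString();
    }
}
```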
In addition to all this wonderful information you could also add session and page context information if you wished depending on what servlet features your web application utilizes.
Now, how do we notify ourselves when an error occurs? There are many different ways of reporting errors. I find Log4J the simplest. By configuring a simple log4j.properties file and placing it in my root classes folder I can catch all Log4J errors and have them emailed to me via the SMTPAppender.
log4j.appender.mailer=org.apache.log4j.net.SMTPAppender
log4j.appender.mailer.Threshold=ERROR
log4j.appender.mailer.BufferSize=10
log4j.appender.mailer.from=error@domain.com
log4j.appender.mailer.SMTPHost=localhost
log4j.appender.mailer.subject=Website Name Error
log4j.appender.mailer.to=support@support-domain.com
log4j.appender.mailer.layout=org.apache.log4j.PatternLayout
log4j.appender.mailer.layout.ConversionPattern=%t %-5p %d{dd MMM yyyy HH:mm:ss,SSS} %c{2} - %m%n
log4j.logger.com.package=ERROR, mailer
In the JSP file, log the error using the StringBuffer report we built above.
// create the logging category
Category cat = Category.getInstance("com.package.jsp.error");
// log error
cat.error(sb.toString());
Alternatively you could use something like the Jakarta Mail Tag Library to send an email.
Astute readers will note that without the localhost mail server running, this plan isn't going to work. I'll leave it as an exercise, or a later article, to devise a strategy for monitoring both client websites and their respective services, such as databases and mail servers, to ensure all necessary services are always running.
You'll be amazed at how much better your customers feel when you call them ahead of time to let them know that you noticed a problem and are already taking care of it. Problems occur. But being proactive about as much as you can will go a long way to improving your customer relationships and your peace of mind.
My company Radiant Core is looking for a Senior Graphic Designer. If you're in the Toronto area and like working with smaller companies doing interesting work on the web, check us out!
Most programmers design for programmers, or most programmers program for themselves. Call it laziness, call it self-serving: most programmers will develop as it suits them. This form of development is not always what suits the end user from a usability point of view. Most often the final product is the result of the least amount of programming or the simplest design pattern.
The Ajax name was first mentioned by Jesse James Garrett of Adaptive Path in Ajax: A New Approach to Web Applications, in which Jesse explained that Ajax is a new methodology for building web applications by leveraging dynamic browser display technologies including CSS, the DOM and JavaScript, specifically the XMLHttpRequest object. These technologies have been with us for a while, but their power remained under the radar until Google recently introduced Google Suggest and Google Maps.
Since then, several Ajax posts have been made by bloggers either denouncing it for its disruptive effect on well-respected design patterns or brainstorming on techniques to harness its power.
The Ajax concept is not a replacement for the traditional model of web browsing. Too often we have seen developers design for themselves with previous web technologies, producing harmful interface repercussions such as breaking the back button with frames and Flash, because it suited their definition of an enhanced "user experience". Ajax can shine by enhancing page components, minimizing full data updates and screen redraws when most of the page remains consistent. It can also enhance interface controls and add drag-and-drop capabilities. These additions are not so harsh as to confuse the user into thinking that the interface has changed pages. In both of the Google examples the enhancements clearly improve the application without the user being confused by the page state.
"But users shouldn't be using the web for applications," some say. Wrong. Users will use what they deem fit. Millions of people don't use web-based email systems because there is no alternative; they use these services because they are compelling: a more compelling experience than the alternative desktop or downloadable option. Java applets had their time, but incompatibilities and slow download and initialization times made it extremely frustrating and difficult for users to adopt and accept them.
Testing and architecture can be remodeled around a better interface. By enhancing existing tools we can provide a suitable development and testing environment while simultaneously providing the experience the user wants. Let's not let our laziness hurt the user experience.
David Walend is searching for a place to store his Javadocs. One of the questions that most people ask is "Should I store my Javadocs in CVS?" My answer? Don't. CVS, as Dave essentially concludes, is for source code. Its power is in the fact that it tracks differences between versions of each file and can mark sets of files for releases or branches.
All your Javadocs and class files should be constructed after you have checked out the source from CVS and have run a build either for the entire project or specifically for class files or Javadoc files.
If you need to make a release of your compiled code or your Javadocs available, then make a proper release, either in unpacked format on a website or within a zipped package for your end user. The same goes for end-release jar files and war files. Only check in binaries like jars if the code relies upon them at compilation time.
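To make the workflow concrete, a build-file fragment can regenerate the docs from a fresh checkout instead of committing them. This is a hypothetical Ant target; the paths and package name are placeholders, not from any project mentioned here:

```xml
<!-- Hypothetical Ant target: rebuild Javadocs after checkout rather than
     storing them in CVS. "src", "build/docs/api" and "com.example.*" are
     placeholders for your own project layout. -->
<target name="javadoc" description="Build the API docs from source">
    <javadoc sourcepath="src"
             destdir="build/docs/api"
             packagenames="com.example.*"/>
</target>
```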
It's nice to have all your files in a central storage location, but let's not cram everything in just because we can.
Sometimes when a JSP page errors out, it doesn't redirect to the specified error page but displays the error inline. In that case the page buffer has most likely been filled; because data has already been sent to the browser, the server cannot send a redirect command.
In these cases, increase your JSP page buffer size with the page directive's buffer attribute. The default is 8KB. Increase it to something that your page will most likely not fill.
<%@ page buffer="50kb" %>
Among other things, Ajax is a cleaner, a town near Toronto and a soccer team. Now it's also the acronym coined by Adaptive Path to describe a rich web interface combining XHTML, CSS, the DOM, XMLHttpRequest and JavaScript.
Ajax is a concept whose time has finally come to the mainstream. We now have browsers that support enough of the web standards to build richer web interfaces.
Marc Logemann argues that rich web applications would be better architected as client side applications in Swing or C#. I think the use of client side applications has slowed as we shift intelligence and CPU cycles outward. Not every application is suited to live on the server side but building richer interfaces to web sites or web applications fills in the middle ground between client side and server side.
Round-trip delays have always prevented web applications from rivaling client-side applications. The ease of an instant-on, always-updated application is the appeal of a web application. By leveraging Ajax concepts we can give the user a much more responsive experience.
Adrian Spinei has provided the code to calculate a URL's Google Page Rank value using Java. Awesome!