This is just a short pointer to an article by James O. Coplien, who happens to be a visiting professor at my old university. The article is entitled “Teaching OO: Putting the Object back into OOD”. Anyway, the central premise of the article is that the object itself, rather than the class, should be viewed as the fundamental building block of object orientation.
A short while ago I wrote an article about the web as an application suite. Jon Udell recently wrote an article about Interactive Microcontent, in which he discusses working with fragments of a document. One of the examples he gives is of calendar data embedded into a document, and his example has many similarities to one I gave in my article. Clay Shirky’s semantic web essay and Tim Bray’s semi-rebuttal of it have recently raised the profile of the semantic web topic again. I think that coming up with real and concrete applications for embedded metadata will help to provide the impetus the semantic web (in whatever form it takes) needs to become more than just an academic exercise.
I’ve been programming in Java for a few months in my new job now; my previous experience with C# helped me pick it up quite quickly, so there wasn’t too much of a learning curve for the stuff I’ve been working on. Now that I’ve entered the corporate world I reckon I should start doing some corporate stuff like getting certified, so here’s my plan:
- Await delivery of Sun Certified Programmer & Developer for Java 2 Study Guide.
- E-mail a few people who I know have passed the certification and ask them for any tips.
- Study & read followed by some more studying & reading.
- Take the exam.
If any of you have any hints you want to pass on then get in touch.
This is just a quick update to let people know the site isn’t closed; I’m just really busy with my new job at the moment. Going into full-time employment from a student lifestyle means you have a lot less free time available! One of the things I have been up to, though, is travelling to Utah for my brother-in-law’s wedding. Here are a couple of snaps from the trip.
I came across a set of notes written by Paul Hammond concerning the recent lecture given to the Royal Society by Tim Berners-Lee. One of the interesting points Paul brings up is that despite living in a connected environment, the connections aren’t all that strong. Specifically, most of the time the connection is the ubiquitous copy-and-paste scenario.
A goal of the semantic web is to provide both human-readable and machine-processable documents in the same package. For example, it is quite easy for a human to recognise a date (in one of the myriad ways it might be formatted), but for a computer to unambiguously recognise a date in a potential multitude of formats is more difficult. In his notes, Paul outlined what he did with an e-mail he received concerning the lecture:
When an e-mail came round the office about this evenings talk, the first thing I did was type the date into my calendar. I then looked up the address on a map site, and put a link into the appointment. I also added some quick notes about the subject of the talk. I then forwarded the info on to several friends who might be interested. They probably did the same thing. At every step, I had to manually cut and paste the information between applications, as did everyone else.
When I hear stories like this I am reminded of Microsoft’s attempts to introduce “Smart Tags”. Despite the obvious anti-Microsoft feeling (which, given their track record, may not be unwarranted), smart tags have always seemed to me a good idea at some level. Replacing the smart tag concept with in-document metadata provides a suitable platform for cross-application communication. Imagine a suite of loosely coupled, web-enabled applications, and now consider these applications registering an action for a specified metadata type. Need an example of what the heck I’m talking about yet?
Imagine we start off with a small document fragment:
… let’s meet up at around 10pm this tuesday …
Now using a sprinkling of automagic we get this fragment (the user of course only sees the original text):
… let’s meet up at around <date value="2003-11-23 22:00:00">10pm this tuesday</date> …
Using a bit of magic based on the date the message was written (another piece of metadata) and a few specified rules, the human-readable date can be annotated with more machine-processable information. By then taking the concept behind MIME types to a different level of granularity, we can conceive of applications registering actions based on a specified metadata type. Given the previous sample, a calendar application may associate an “add event on date” action with the date metadata. This action could then be exposed to the user in some manner (through a context menu, for example).
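The registration idea can be sketched as a tiny registry mapping metadata types to named actions. Everything here is my own illustration, not a real API: applications register actions against a type, and a context menu is built from whatever matches the fragment under the cursor.

```java
import java.util.*;

// Hypothetical sketch: applications register actions against a metadata
// type; given a fragment's type, we can list the actions to offer the user.
public class MetadataRegistry {
    private final Map<String, List<String>> actions = new HashMap<>();

    // A calendar application might call: register("date", "add event on date")
    public void register(String metadataType, String actionName) {
        actions.computeIfAbsent(metadataType, k -> new ArrayList<>()).add(actionName);
    }

    // The actions a context menu would show for a fragment of this type.
    public List<String> actionsFor(String metadataType) {
        return actions.getOrDefault(metadataType, Collections.emptyList());
    }

    public static void main(String[] args) {
        MetadataRegistry registry = new MetadataRegistry();
        registry.register("date", "add event on date");
        registry.register("date", "look up in calendar");
        // For the <date> fragment above, the menu would offer both actions.
        System.out.println(registry.actionsFor("date"));
    }
}
```

The point is the indirection: the document only carries a type and a value, and each application decides for itself which types it cares about.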
Now for a bit of prior art:
Everyone seems to be running CSS challenges these days. I launched mine well over a year ago. It’s a bit different from most of the others, though, and is designed to be a test of your table coding skills. If the truth be told I hadn’t thought much about it until I received an e-mail the other day (received in the process of relocation, hence the lack of updates to the site as my computer has been in a box). As with most e-mails, it either praises my site or disses it; guess which one this comes under.
I was going to take up your challenge but then I looked at your example page in four different browsers, IE6, Moz1.4, Op7.2 and Firebird0.6, and it looks different in each. My question was going to be “Which version is the desired one” but then I thought better.
If you’re going to challenge table layouts then you should at least be sure that your page looks the same in the common browsers.
Not to mention that the content of the boxes on your example page spills out of its containing box when the window is narrowed.
You’re right, No one can make a table layout that misbehaves that way.
I challenge you to publish this e-mail.
Ephraim F. Moya
Hey, challenge accepted; now accept mine! Seriously though, now that I am no longer a student but in full-time employment, I’m going to put up a prize and set a date for entry.
If you want a chance to win then get your entries in by the 27th of September. The prize is a valuable link in the sidebar on my front page (assuming it’s not a dodgy site), a couple of CDs that I don’t listen to any more, and any miscellaneous goodies I pull together at the last minute (a set of signed juggling balls perhaps, signed by me). I’ll ship it pretty much anywhere in the world, so feel free to enter; if shipping costs get too much, though, the prize gets smaller! If you want to enter, create your design and send me a note letting me know where you’ve put it. If there are any problems then drop me a note and I’ll see what can be done. The winner will be announced on the 29th.
Oh yeah, Ephraim, just one site for you to look at: KPMG.
No one can make a table layout that misbehaves that way, but I can find plenty that misbehave in other ways.
Over on the Isolani site there is a good article discussing the accessibility problems of layout tables in HTML. As usual, Iso takes quite a harsh line against the use of tables for layout; unlike many critics of table layouts, however, he clearly and concisely expresses the reasoning behind the points raised. While I agree with the opinions expressed in that article, I should note that there are voices raised supporting the use of tables for both layout and data.
So, two dissenting opinions from people who champion accessible web design. Which side of the fence should I come down on as a web developer?
Well, the short answer is that I believe in CSS for layout and tables for data. However, it should be noted that CSS is not always easy. Don’t be fooled: your standard templates might be OK, but as you try to achieve more advanced effects you begin to run into browser incompatibilities. Does this mean we should give up on CSS and return to tables for layout? Some would say yes. I would say that when you’re up against a deadline, and using a table to get a particular layout would take five minutes but could take a couple of hours to get right in CSS, you are tempted.
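As a concrete illustration (my own sketch, not taken from either article), here is the classic two-column layout done both ways. The table version hijacks a data structure for presentation; the CSS version gets the same effect from two plain divs.

```html
<!-- Layout table version: table markup abused for presentation -->
<table><tr>
  <td width="200">navigation</td>
  <td>content</td>
</tr></table>

<!-- CSS version: same two columns, no table -->
<div style="float: left; width: 200px;">navigation</div>
<div style="margin-left: 200px;">content</div>
```

The CSS version keeps the markup describing content rather than grid cells, which is exactly what screen readers benefit from.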
What does this teach us? CSS can be hard. In my opinion, if you want to be a professional in the web design game you have to learn some of the hard stuff now and again or you will sink into a pit of stagnant mediocrity. Learn it when you’re not under pressure, experiment, and have a bit of passion about your chosen field.
This is just a quick entry to point out a couple of really cool tools for C# that I’ve added to my home brew development setup and have been using in my development work recently.
Design By Contract
Bertrand Meyer has written one of my all-time favourite books on object orientation. The “design by contract” concept is a powerful one; I don’t have time to do it justice here, so suffice it to say I recommend the book if you can get hold of it. Kevin McFarlane is the guy behind a C# implementation of the concept, and I like it. Checking the comments made about Kevin’s article suggests that there might be an alternative method of accomplishing assertion checking using attributes.
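For anyone unfamiliar with the idea, here is the bare bones of it in plain Java. Note this is only a sketch of the concept, not Kevin McFarlane’s C# implementation; the `Account` class and its contract are my own illustration.

```java
// Design by contract, minimally: a method states what it requires of its
// caller (precondition) and what it guarantees afterwards (postcondition),
// and both are checked at runtime.
public class Account {
    private int balance;

    public Account(int openingBalance) {
        balance = openingBalance;
    }

    public void withdraw(int amount) {
        // Precondition: a positive amount that the account can cover.
        if (amount <= 0 || amount > balance)
            throw new IllegalArgumentException("precondition violated");
        int oldBalance = balance;
        balance -= amount;
        // Postcondition: the balance dropped by exactly the amount.
        if (balance != oldBalance - amount)
            throw new IllegalStateException("postcondition violated");
    }

    public int getBalance() {
        return balance;
    }
}
```

A contract violation blows up immediately at the point of blame, rather than corrupting state and surfacing as a mystery bug three calls later.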
Ant build tool
I came across Ant recently while developing a Tomcat-based Java project. I was impressed with its flexibility and started out looking for a .Net version; little did I know that Ant already had a few .Net tasks hidden away, oh yes. Using a text editor to do my programming, rather than an IDE, means that having a good build tool comes in very useful.
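For anyone who hasn’t used Ant, a build file is just XML describing targets. This minimal sketch (the directory names are my own assumptions, not from any particular project) compiles Java sources using the core `javac` task:

```xml
<!-- build.xml: a minimal Ant build file -->
<project name="demo" default="compile">
  <target name="compile">
    <mkdir dir="build/classes"/>
    <javac srcdir="src" destdir="build/classes"/>
  </target>
</project>
```

Running `ant` in the same directory picks up `build.xml` and runs the default target; the .Net tasks slot into the same target structure.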
NUnit unit-testing framework
NUnit finishes off this little trio by introducing a testing suite into the mix. This is what I use whenever I work with my home brew coding setup. It keeps me sane by making sure I know when I break something; if you haven’t got a testing system set up then you should really think about it.
This item will show some of the common methods of dealing with text that is too small to read in Internet Explorer. In addition to the common methods, I will also demonstrate a technique I developed myself. Internet Explorer users can have problems with font sizing that other browser users do not have. These stem from two root causes:
- Limit of five levels of text sizing.
- Some text cannot be resized (fonts specified using “px” values in CSS).
The limited level of text zoom Internet Explorer offers can be compensated for using screen magnification utilities. The second problem can be overcome using some of Internet Explorer’s accessibility options, available from the Tools menu. In recent versions of Internet Explorer, going to the Tools menu and selecting the “Internet Options…” item will give you access to configuration options that modify how Internet Explorer displays pages. Under the “General” tab there should be a button for accessibility options, usually labelled “Accessibility…”. Clicking this button will open the screen for setting accessibility options.
The screen shot above shows the accessibility dialog; the important point to note here is the option to “Ignore font sizes specified on Web pages”. This option is the important one because many web pages are set up to use small text by default, and there are some web sites where the size of text cannot be increased by Internet Explorer. By ignoring font sizes, pages that try to set a really small font won’t be able to do so.
The uncommon solution
After briefly outlining two ways you can adjust Internet Explorer’s settings to better suit a person’s preferred text size, I’ll outline another method I have developed. First of all, however, I’ll identify a couple of weaknesses in the methods outlined above.
- Ignoring font sizes can break the layout of a page.
- Ignoring text sizes does not allow images containing text to be resized.
- A screen magnifier can be troublesome to start and use if you only need it occasionally.
I developed a couple of bookmarklet links for Internet Explorer that will resize a web page, including its graphics, to make the page more readable. This method has the advantage that it preserves the layout of the page and enlarges pictures as well as text. The two bookmarklets are my Zoom +25% bookmarklet that will zoom in on a page by 25% each time it is clicked, and my Zoom 100% bookmarklet that returns a page to its original size. These can be placed on the links bar so they are always available to click on and zoom in on a page. If you have Internet Explorer you can test if they work for you by clicking on the links in this page.
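The mechanism behind these bookmarklets could look something like the sketch below. It relies on IE’s proprietary `style.zoom` property (which scales images as well as text); the `nextZoom`/`resetZoom` names are my own, and the real bookmarklets are this logic inlined into a single `javascript:` URL such as `javascript:void(document.body.style.zoom=(parseFloat(document.body.style.zoom)||1)*1.25)`.

```javascript
// Sketch of the zoom logic behind the two bookmarklets.
// An unset zoom counts as 100%; each "Zoom +25%" click multiplies by 1.25.
function nextZoom(current) {
  return (parseFloat(current) || 1) * 1.25;
}

// "Zoom 100%" simply restores the original scale.
function resetZoom() {
  return 1;
}
```

Because `zoom` scales the whole rendered page, pictures and text-in-images grow together and the layout holds its shape, which is the advantage over the font-size-only approaches.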
To add these bookmarklets to your links bar you first need to make sure the links bar is visible: go to “View”, “Toolbars” and check that “Links” is selected. Drag the links bar so it is big enough to put the links on (the toolbars may stop you resizing them; if so, you need to unlock them by removing the check next to “Lock the toolbars” in the toolbars menu). When the links bar is big enough, just click on the two links (Zoom +25%, Zoom 100%), keep the mouse button held down, drag them to the toolbar, and let go. The links should then be visible on the toolbar as shown in the screen shot above.
These bookmarklets do not work with other browsers such as Opera or Mozilla, however these browsers have generally better text resizing than Internet Explorer to start with and so don’t require these workarounds.
Update: This guy seems to have come up with a similar idea to mine.
Refactoring is the process of changing a software system in such a way that it does not alter the external behaviour of the code yet improves its internal structure.
I believe that sometimes code needs to be refactored. For example, an improvement in the internal structure of a software system is sometimes necessary in order to improve its extensibility. All too often, however, refactoring is taken to mean ripping out the guts of the system and starting again. It is a common problem that programmers will try to make the move to cleaner code by throwing out the existing body of work and beginning from scratch.
To be effective refactoring needs to be understood for what it really is. There are a couple of keys to effective use of refactoring in a project:
- Refactor in small iterations. After each iteration the external behaviour should be the same.
- Check that the external behaviour remains the same! This is where you run the test suite. You do have one don’t you?
- Modifying or adding functionality is not refactoring! They are separate steps, keep them separate (using small refactoring steps reduces the temptation to merge these different processes)
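The steps above can be made concrete with a toy example (the pricing code is mine, invented for illustration): one small structural change, behaviour identical, and a check that proves it.

```java
// One small refactoring step: extract the duplicated discount rule into
// its own method. External behaviour must be byte-for-byte identical.
public class RefactorDemo {
    // Before: the pricing rule tangled into one method.
    static int priceBefore(int units) {
        if (units > 10) return units * 90;
        return units * 100;
    }

    // After: same behaviour, but the rule now has a name and one home.
    static int priceAfter(int units) {
        return units * unitPrice(units);
    }

    static int unitPrice(int units) {
        return units > 10 ? 90 : 100;
    }

    public static void main(String[] args) {
        // The "test suite": run after every refactoring step, not at the end.
        for (int u = 0; u <= 20; u++) {
            if (priceBefore(u) != priceAfter(u))
                throw new AssertionError("behaviour changed at units=" + u);
        }
        System.out.println("ok");
    }
}
```

If the check fails, the step was small enough that the mistake is obvious; that is the whole argument for small iterations.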
Without rigorous testing of refactored code, can you be sure that the code is functionally equivalent? If you are “refactoring” code and not testing it, then you are potentially introducing bugs for no external benefit! I am reminded of a quote from Lou Montulli:
I laughed heartily as I got questions from one of my former employees about FTP code that he was rewriting. It had taken 3 years of tuning to get code that could read the 60 different types of FTP servers, those 5000 lines of code may have looked ugly, but at least they worked.
You cannot refactor code if you do not understand what its external behaviour is. When you perform refactoring in small steps with frequent testing then you will begin to get the benefits from it. Two key insights to refactoring are:
- It’s easier to rearrange the code correctly if you don’t simultaneously try to change its functionality.
- It’s easier to change functionality when you have clean (refactored) code.
I cover this subject today because of a discussion I was involved in concerning the CSS property visibility and its value collapse. In investigating the rendering of these properties, I discovered that visibility: collapse was rendered incorrectly in a variety of browsers. Investigating whether this had been brought up in the bug tracking system of the Mozilla project, I came across bug number 77019. The problem had been identified; reading the comments on the bug, I came across this beauty of a comment:
Oh Noooooo, we had the collapse code working in the tree for years and it was simply optimized away. Hmm may be optimized is not the correct word for it.
And herein lies the moral of this tale: refactoring without understanding the existing code, or without rigorous testing, means introducing both regressions and new errors into the software system.