One of the big stories circulating recently is the legal wrangling between Google and SearchKing. In reading some of the commentary on the case, several aspects interested me, especially people's seeming willingness to turn search engines into regulated utility companies. This subject has already been covered elsewhere:
Google is so important to the web these days, that it probably ought to be a public utility. Regulatory interest from agencies such as the FTC is entirely appropriate, but we feel that the FTC addressed only the most blatant abuses among search engines. Google, which only recently began using sponsored links and ad boxes, was not even an object of concern to the Ralph Nader group, Commercial Alert, that complained to the FTC.
In my opinion, however, such regulation should not be imposed upon these companies. What often goes unacknowledged is that many of these internet “giants” are not used solely by US citizens: I live in the UK, and I do not feel that restrictions should be placed on Google by the US system that would adversely affect my search experience. In any case, the following quote echoes many of my own views.
It’s possible to read this case as a case about media regulation. Maybe Google is a common carrier; in agreeing to rank pages and index the Internet, it has (implicitly) agreed to abide by a guarantee of equal and non-discriminatory treatment. On this view, it would be immensely important whether Google devalued SearchKing specifically, or as part of a general algorithm tweak. A great deal may also hinge on whether you think that Google provides access to information or merely comments on it. SearchKing alleges the latter, and Google agrees, but maybe SearchKing should have brought its case by arguing that Google has become, in effect, a gatekeeper to Internet content. On that view, a low PageRank isn’t just an opinion, it’s also partly a factual statement that you don’t exist in answer to certain questions, on the basis that low search results are never seen. When was the last time you looked for results beyond 200 on a search request returning 20,000 pages?
These are very messy questions, but also very important ones. They’re also very unlikely to be addressed directly in the courtroom, in this case or in other cases. Existing law just comes down too squarely on Google’s side (I think) for courts to take these broader questions without mutilating our existing rules. Nor should they. Not everything should be settled in the courtroom, and the discussion about the proper role of search engines is one that needs to take place in the same place this case began, back before it was a lawsuit: out on the Internet, where people read and appreciate others’ thoughts, and then contribute their own by adding links. Among other things, Google is a device for determining the consensus of the Web; and it’s just not right to fix the process by which we determine consensus by any means other than honestly arriving at one.
Perhaps, as the internet and the information contained on it become more important to us as a society, the answers we think we already have will need to be re-evaluated.
There has been some discussion recently concerning content management and the role of HTML in that process. First of all, Brian Donovan states that you need to avoid “poisoning your content with HTML”; the points he makes make quite a lot of sense in certain contexts. In fact, while reading “The Content Management Bible” I came across some similar thinking. The basic proposition is that by keeping HTML out of the content, you can reuse the content in many other areas. This is an interesting viewpoint, and one I generally agree with; the important thing, however, is the context.
What do I mean when I talk about all this context malarkey? Well, as observed elsewhere, it's a lot of effort to organise your content using databases and content management systems; they can automate a lot of work, but they are not a trivial enterprise. The context of a situation helps to determine the solution employed in it: a weblogger does not generally need a fully blown CMS, whereas a news organisation almost certainly does.
The context I am working in, with respect to this weblog, is strictly smaller scale. Can XHTML be used as the CMS language? What benefits, for the small-scale website, does a CMS provide that cannot be provided using XHTML? The separation of content from presentation, an often-heard mantra among netheads, can be achieved using XHTML. How can this be done? Well, drop all the deprecated, presentational aspects of HTML and embrace CSS and strict DOCTYPEs. Yaddayaddayadda…
What's that, dozed off? Let's get back on track then: I said I was talking benefits, not features. The benefit is that your content doesn't have to change every time you change your design. This isn't always as simple as it sounds for large-scale changes, but simple site-wide changes can be made very easily. Proper use of the semantic elements of XHTML can help to tie your site together; you just need to know what the semantics are. [abbreviations] [definitions]
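To make the benefit concrete, here is a minimal sketch of the idea; the markup is hypothetical, not taken from this site. The structure and meaning stay in strict XHTML, via elements such as abbr and dl, while every presentational decision lives in the stylesheet:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
  "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<!-- the abbreviation carries its own expansion -->
<p>A small site rarely needs a full
<abbr title="Content Management System">CMS</abbr>.</p>
<!-- a definition list marks up term/definition pairs semantically -->
<dl>
<dt>Content</dt>
<dd>What you say; it lives in the XHTML.</dd>
<dt>Presentation</dt>
<dd>How it looks; it lives in the CSS, e.g. dt { font-weight: bold; }</dd>
</dl>

A redesign then means editing the stylesheet, not every document on the site.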
Thinking about all this has reminded me of an experience at a company I worked for, where we were changing over from Lotus applications, such as 123 and AmiPro, to the Microsoft Office suite. At the time I worked in accounts, and I am sure you can imagine the number of spreadsheets and documents that needed to be converted within the department; if you can't, the number was around 20,000. I was given the job. What it taught me was that if you need to make a wholesale conversion from one format to another, use good tools to do it. How does this relate to website design? Well, content has been passed on from one generation of program to another for a while now; don't worry too much about it, but when you need to do it, organise yourself and use good tools.
I’m not going to write much today [not that I write much anyway]; I am still recovering from partying/dancing until 7am this morning in Madrid and then flying back home to good old England this afternoon. May you have a prosperous New Year!
Yes, that's right, it sucks. Why do I say this? Because I want to distribute copies of the music to anyone who can connect to a peer-to-peer network? Actually, no: I've never had Kazaa, Gnutella, Morpheus or anything similar installed on my PC, ever. The CD in question is my recently purchased copy of Alicia Keys' “Songs in A Minor”, with accompanying remixes. The main CD can be copied to my hard disk without a problem; the remix CD, however, is copy-protected and requires a Windows PC to run some proprietary software, so if you're not on a Windows PC… Anyway, I'm running Windows XP on a 1.6GHz Intel P4, so it didn't bother me too much, apart from not being able to listen to it at the same time as my other (legally obtained) tracks on my hard disk. You may be wondering why I don't just play it on a handy CD player; well, the fact is my handy CD player is sitting in the drive of my PC.
Giving in would have been so easy to do. That's right: I would not have put up much of a fight over copying the tracks if the playback on my machine had been of reasonable quality; the fact was, though, it wasn't. To listen to the tracks without introducing my own remixing in the process, actually touching the PC during playback wasn't allowed, mainly because the scroll bar of any active window seemed to act like my very own scratch deck. Obviously not a state of affairs I was happy with. My determination was set: copy the songs onto my hard disk at reasonably good quality for trouble-free playback, something that I feel is my right. So how to defeat the copy protection? Cue the first clue:
3.0 build 12a
This little snippet, found in a version.txt on the CD, indicated that the disc was protected using Cactus Data Shield copy protection. The brochure for the copy protection scheme used can be found on the website of the authoring company. Overall the technology looks quite interesting and seems to work quite well by all accounts. Anyway, that's enough of my moaning; have a good Christmas!
I’ve been busy working on my fourth-year project, a proof-of-concept device based around some of the Philips Nexperia technology. As part of the process I wrote a small report examining Pie Menus and comparing them with traditional linear menus, listing both the advantages and disadvantages of Pie Menus. The report focuses on the use of Pie Menus as part of the interface to a digital set-top box. Pie Menus are something I've been interested in for a while, so getting the chance to implement them in a proper application is something I am looking forward to.
Real life has been getting a bit hectic, with university reports to be written and the like. In the meantime I'll post a cool quote from the Small Initiatives newsletter, SIDL.
When they say: “Why doesn’t your site look right on my (choose from the following relic platforms: Commodore VIC-20|Atari 800|Coleco Adam|Apple IIe) running (choose from the following relic clients: Spyglass Mosaic|Prodigy browser|Netscape 1.1N)?”
I wish I could say: Perception is reality. How do you know it doesn’t look exactly the way we wanted, just for you, while the rest of the world is the “graceful degrade”?
Browsing some of my old bookmarks, I went from here and then on to here. Turns out today is Bill Gates' birthday.
After using mouse gestures for a while and not finding them as well integrated as the implementation found in Opera, I decided to try a different approach. The new pie menu add-in I installed seems somewhat more intuitive than the mouse gestures; time will tell whether it is something I will persevere with or discard like the Mozilla mouse gestures. The problem I had with the mouse gestures was their interference with text selection; in the end it just frustrated me too much.
So you want to develop and test ASP.NET web applications on a Windows XP Home machine. What do you do?
- Get WebMatrix [if you don’t have it already]
- Use WebMatrix for a bit until you despair at the crap HTML it turns out.
- Long for the utility of your favourite Text Editor [TextPad]
- Despair at having to boot up WebMatrix first just to set the WebMatrix WebServer running from the desired directory and port at the start of every session.
- Figure out how to improve the situation
Yes I have figured it out [pretty basic really].
- Find the WebServer.exe in the WebMatrix program folder
- Create a shortcut to the exe
- Add parameters for the directory and port number from which you wish to run the server
- Copy the Shortcut to the “Startup” folder
- Publish it on your personal website
- "C:Program FilesMicrosoft ASP.NET Web Matrixv0.5.464WebServer.exe" /port:8080 /path:"C:WebRoot" /vpath="/"
I managed to compile my first non-trivial (read: beyond “Hello World”) C# application today. The lucky application was the open-source news reader Aggie. So how did I do it? Quite simply, if truth be told.
Steps to compiling Aggie (or just download the functioning application)
- Get the source
- Put it somewhere
- Delete or move AggieCmd.cs (the command-line version of the source)
- Run the compiler, C:\>csc /out:aggie.exe /target:winexe *.cs
As a first taste of compilation, not too bad (after a few false starts caused by never having compiled C# before).
Why not just download the thing? Because I would have missed out on all the hacking I'm going to do with it. Successfully compiling from source was just the first step on my road to C# mastery. One of the main false starts was that I wasn't sure which files needed to be compiled; including the AggieCmd.cs file caused namespace conflicts during compilation (I imagine because it is a substitute for Aggie.cs). Anyway, a quick look using the Microsoft disassembly tool, ILDASM, showed me what was in both files, and I quickly realised how the different source files fitted together.
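For anyone who wants to repeat that inspection, a minimal sketch, assuming the .NET Framework SDK's ildasm.exe is on your PATH:

C:\>ildasm aggie.exe
C:\>ildasm aggie.exe /text

The first command opens the ILDASM tree view of the namespaces, classes and methods inside the compiled assembly; the /text switch dumps the disassembled IL to the console instead.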