Sharovatov’s Weblog


Posted in web-development by sharovatov on 2 June 2009

Recently I had a short discussion on Twitter with my web standards guru Molly Holzschlag about the XHTML serialisation of HTML5. I was wondering what the use case for XHTML was when the HTML serialisation provides everything you might need. She then talked to Steven Pemberton, W3C Chair of the HTML and Forms Working Groups, and published the results of that conversation here.

At first I thought I'd just comment on her blog post, but then decided that my readers would be interested in this as well.

I'd like to reply to each of Steven's statements.

So, parsing. Everyone has grown up with HTML’s lax parsing and got used to it. It is meant to be user friendly. “Grandma’s mark-up” is what I call it in talks.

I completely agree that everyone has grown up with HTML's lax parsing. This implies that making people switch to another approach (strict parsing), even if it's for the better, would be quite a significant task.

And it would be impossible to change all the existing pages that were written in HTML. This is a backwards-compatibility problem of the XHTML standard.

But there is an underlying problem that is often swept under the carpet: there is a sort of contract between you and the browser; you supply mark-up, it processes it. Now, if you get the mark-up wrong, it tries to second-guess what you really meant and fixes it up. But then the contract is not fully honoured. If the page doesn’t work properly, it is your fault, but you may not know it (especially if you are grandma) and since different browsers fix up in different ways you are forced to try it in every browser to make sure it works properly everywhere. In other words, interoperability gets forced back to being the user’s responsibility.

Well, the main reason why you have to test your pages in different browsers is CSS, not the mark-up. And if you use a validator to check your HTML, you will get essentially the same HTML parser behaviour in all popular browsers. So you as an author can make sure your webpage doesn't have any errors, while even the worst malformed old-school tag soup is parsed very similarly (if not identically) by all popular browsers. This is what real backwards compatibility is.
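The lax-versus-strict distinction is easy to see outside a browser too. Here is a minimal Python sketch (my own illustration, not code from the original discussion) contrasting the standard library's lax HTML parser, which silently recovers from tag soup, with its strict XML parser, which rejects the same input outright:

```python
from html.parser import HTMLParser
import xml.etree.ElementTree as ET

# Classic tag soup: unclosed <li> elements, which HTML parsers accept.
soup = "<ul><li>one<li>two</ul>"

# The HTML parser recovers silently and still reports every start tag.
class TagCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.tags = []

    def handle_starttag(self, tag, attrs):
        self.tags.append(tag)

collector = TagCollector()
collector.feed(soup)
print(collector.tags)  # ['ul', 'li', 'li']

# The strict XML parser refuses the same markup with a parse error.
try:
    ET.fromstring(soup)
except ET.ParseError as err:
    print("XML parser rejected it:", err)
```

This is the "contract" difference in a nutshell: the HTML side fixes the input up for you, the XML side makes the error yours to fix.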

Now, if HTML had never had a lax parser, but had always been strict, there wouldn't be an incorrect (syntax-wise) HTML page on the planet, because everyone uses a 'suck it and see' approach:

  1. Write your page
  2. Look at it in the browser, if there is a problem, fix it, and look again.
  3. Is it ok? Then I’m done

and thus keeps iterating their page until it (looks) right. If that iteration also included getting the syntax right, no one would have complained.

Well, I can speculate on this as well: if HTML didn't have lax parsers, it wouldn't be popular at all and something else would have been invented. Or it would be used on two pages in the whole WWW that nobody knows about, as XHTML2 is now :) I really think that the low entry cost of publishing a page to the web played a very important role in the development of the WWW.

So the designers said for XML “Let us not make that mistake a second time” and if everyone had stuck to the agreement, it would have worked out fine. But in the web world, as soon as one player doesn’t honour the agreement, you get an arms race, and everyone starts being lax again. So the chance was lost.

So I am a moderate supporter of strict parsing … I want the browsers to tell me when my pages are wrong, and to fix up other people’s wrong pages, which I have no control over, so I can still see them.

So he admits that XHTML's draconian error handling has a usability issue, and agrees that fixing up malformed pages is a good thing. Well, right: nobody will look at a technology that doesn't provide backwards compatibility. Stick to HTML, Steven: you'll be able to use validators to check whether your code is well formed, and all HTML parsers will fix any errors that you, your Grandma, your readers commenting on a blog entry (or even your CMS!) accidentally leave.

There is one other thing on parsing. The world isn’t only browsers. XML parsing is really easy. It is rather trivial to write an XML parser. HTML parsing is less easy because of all the junk HTML out there that you have to deal with, so that if you are going to write a tool to do something with HTML, you have to go to a lot of work to get it right (as I saw from a research project I watched some people struggling with).

What else apart from browsers do we have, Steven? We've got Opera (which handles HTML just fine) even on the smallest and slowest mobile phones. Every single search engine is able to parse, extract, categorise and rate information from even the most wildly-crafted HTML pages. If not browsers and search engines, what else?

Look at RSS. Why does it use XML? Only because it was invented, not evolved. There was no backwards-compatibility problem, and that's exactly why XML (a great cross-platform language for storing and transporting structured data) was chosen.
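Steven's earlier point that XML parsing is trivial is easy to demonstrate. With a stock event-driven parser such as expat, a complete well-formedness-checking walk over an RSS-like feed takes a handful of lines (a Python sketch of my own, used purely as an illustration):

```python
import xml.parsers.expat

def element_names(xml_text):
    """Collect element names in document order; raises on malformed XML."""
    names = []
    parser = xml.parsers.expat.ParserCreate()
    parser.StartElementHandler = lambda name, attrs: names.append(name)
    parser.Parse(xml_text, True)  # True marks this as the final chunk
    return names

feed = "<feed><title>hi</title><entry/></feed>"
print(element_names(feed))  # ['feed', 'title', 'entry']
```

The parser itself is almost no code at all, which is exactly why invented-from-scratch formats like RSS could afford to demand well-formed input.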

SVG, MathML, SMIL, XForms etc are the results of that distributed design, and if anyone else has a niche that they need a mark-up language for, they are free to do it. It is a truly open process, and there are simple, open, well-defined ways that they can integrate their mark-up in the existing mark-ups.

Absolutely right. All these wonderful extensions are going to be supported in the HTML serialisation of HTML5 (and look at IE: it's been supporting data islands, HTML+TIME and other extensions for ages!).

So in brief, XHTML is needed because 1) XML pipelines produce it; 2) there really are people taking advantage of the XML architecture.

  1. As you say, it's perfectly acceptable to serve XHTML as text/html; it's then parsed as HTML. So if your pipeline generates output that HTML parsers accept, why use XHTML?
  2. As SVG, MathML, SMIL and XForms are going to be supported in the HTML serialisation, why use XHTML?

I'd like to say thank you to Steven and Molly for spending their time on this, but I still don't see a single use case for XHTML. If I'm missing anything, I'll be happy to fill the gap in my knowledge.


One Response


  1. Kurt Cagle said, on 3 June 2009 at 3:35 am

    A couple of comments, having rehashed an almost identical argument several times before.

    First, about Grandma’s HTML. Grandma doesn’t write HTML, and hasn’t written HTML in more than ten years. If Grandma is going to develop web content at all in this day and age, it will almost certainly be through a WYSIWYG AJAX control running within a web CMS somewhere.

    What this means in practice is that 99.9995% of all of the HTML that is produced in this day and age comes through some automated process – anywhere from the aforementioned WYSIWYG control to XML pipelines to people writing ASP.NET or PHP code. This means, ultimately, that "Grandma" in this instance is really lazy tool developers who frankly SHOULD know better – after all, if they forget a semi-colon in their PHP code, the damn stuff won't run, so why should it be any different for declarative code?

    Second, web browsers may consume HTML, but they are not its only consumers. If I transport HTML in a SOAP package or an RSS feed, I have to escape it on one end and revivify it on the other in order to work with that HTML. It means that your applications have to work harder, and it means millions of dollars more invested in writing an extra layer of quasi-SGML parsing into what are otherwise low-memory parsers for every device that hosts some variation of a browser.
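Kurt's escape-and-revivify point can be sketched with nothing but the Python standard library (my illustration, not code from the comment): carrying an HTML fragment inside well-formed XML means escaping it on the producing end and unescaping it on the consuming end.

```python
from xml.sax.saxutils import escape, unescape

# An HTML fragment to be carried inside, say, an RSS <description> element.
fragment = '<p class="intro">Hello & welcome</p>'

# Producer side: escape &, < and > so the surrounding feed stays
# well-formed XML.
escaped = escape(fragment)
print(escaped)  # &lt;p class="intro"&gt;Hello &amp; welcome&lt;/p&gt;

# Consumer side: "revivify" the payload before treating it as markup again.
assert unescape(escaped) == fragment
```

If HTML were XML-well-formed in the first place, it could travel as literal child elements instead, and both of these steps would disappear.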

    Moreover, SVG has the capability of embedding external XML content within it (a feature that Firefox supports to very good effect, by the way), including XHTML code. Chances are pretty good that if the content involves XHTML, it will be the SVG renderer, not the HTML renderer, that is responsible for it, and non-XML HTML will break that SVG, because the parser expects XML content – unless, of course, you want to render the SVG content as ill-formed SGML as well. Ditto for XForms.

    I know there's a fairly sizeable group within the W3C that disagrees with this position, almost all of them on the HTML side, but overall this notion of carrying the HTML/XHTML split forward is going to haunt us as the world moves towards a more service-oriented architecture.
