Sharovatov’s Weblog

Windows Phone 7 Internet Explorer IEMobile 7.0

Posted in browsers, IE7, IE8, web-development by sharovatov on 15 March 2010

I’ve just watched the MIX keynotes, and as soon as MSFT announced a free VS2010 Express for Windows Phone 7 with a proper emulator, I downloaded and installed it, created a sample app and ran the debugger.

Here are some screenshots:

[screenshot: VS2010 Express for Windows Phone IDE]

[screenshot: the Windows Phone 7 emulator running in debug mode]

The most interesting thing for me was to find out which IE version Microsoft decided to ship with Windows Phone 7. They said it wouldn’t be IE9, but something between IE7 and IE8. They also assured everyone that the Windows Phone emulator (which comes bundled with VS2010 Express for Windows Phone 7) is a proper virtual machine, a real copy of the Windows Phone OS sandboxed in the VM engine.

So, bearing this in mind, I thought I’d test WP7’s IE in the emulator.

And here’s the interesting stuff:

  1. navigator.appVersion on Windows Phone 7 IE returns
    4.0 (compatible; MSIE 7.0; Windows Phone OS 7.0; Trident/3.1; IEMobile/7.0)
  2. the User-Agent string is
    Mozilla/4.0 (compatible; MSIE 7.0; Windows Phone OS 7.0; Trident/3.1; IEMobile/7.0)
  3. @_jscript_version reports 5.8 (as IE8 does)
  4. the [if IE 7] conditional comments section gets applied
  5. the *+html selector { rules } hack works
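
Here’s a minimal sketch of the kind of object-detection checks behind these results and the table below (any IE will run it; the /*@cc_on @*/ block is IE’s conditional compilation syntax):

    // minimal feature-detection sketch; nothing here is WP7-specific
    var report = [];
    report.push('appVersion: ' + navigator.appVersion);
    report.push('userAgent: ' + navigator.userAgent);

    // jscript version via IE conditional compilation
    var jsVer = 'n/a';
    /*@cc_on jsVer = @_jscript_version; @*/
    report.push('jscript version: ' + jsVer);

    // plain object detection
    report.push('native XHR: ' + !!window.XMLHttpRequest);
    report.push('XDomainRequest: ' + !!window.XDomainRequest);
    report.push('Selectors API: ' + !!document.querySelectorAll);
    report.push('localStorage: ' + !!window.localStorage);
    report.push('native JSON: ' + !!window.JSON);

    alert(report.join('\n'));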

So at this moment IEMobile/7.0 seems to be a slightly adjusted IE7 Trident (layout engine) paired with jscript 5.8 (though, as it turns out below, some features are disabled, not accessible for now, or will not be supported at all).

To dive deeper into the details, I’ve tested several things and prepared the following table:

Feature                       Supported
native XMLHttpRequest         Yes
XDomainRequest                No
Selectors API                 Yes
clipboardData                 No
data URI                      No
maxConnectionsPerServer       No
sessionStorage/localStorage   No
offscreenBuffering            No
native JSON                   No
DOM objects’ prototypes       No
getters/setters               No

So from the CSS perspective, Windows Phone IEMobile 7.0 is indistinguishable from desktop IE7: it applies the same conditional comment rules, supports the same subset of CSS selectors, the same hacks work and the same bugs are there. If you know how to support desktop IE7’s Trident, you won’t have a problem with IEMobile 7.0.
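
For instance, both of the usual IE7-targeting techniques reach IEMobile 7.0 as well (a sketch; the file name and selector are illustrative):

    <!-- in the HTML: this stylesheet is served to IE7-class browsers only -->
    <!--[if IE 7]>
      <link rel="stylesheet" href="ie7-fixes.css" />
    <![endif]-->

    /* in a regular stylesheet: *+html matches only in IE7-based Tridents */
    *+html .menu { zoom: 1; }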

The situation with javascript is completely different: the jscript version is 5.8 (same as IE8 has), but many features that IE8 supports via COM wrappers do not exist in IEMobile 7.0 (does it have COM at all?). It also lacks some native features of jscript 5.8 (e.g. native JSON, DOM object prototypes, getters/setters). The only IE8 feature I can see is Selectors API support, which is great, but really, is that all we expected?

So from what I see now, Microsoft took IE7’s Trident (3.1), took jscript 5.8, cut off as much as possible (all COM wrappers and some native features), put an IE8 icon on top and shipped it with Windows Phone 7.

I really hoped it would have IE9 or at least IE8.

The only hope is that it’s still a beta and all the IE8 stuff will be shipped with the final version. Hope that’s not just wishful thinking.

It’s so frustrating to see a beautiful and free VS2010 Express for Windows Phone 7 and the awesome SL4 which runs everywhere, and then look at this crippled “IE7.234”. If this is a marketing choice to push web developers towards Silverlight, it’s a silly one, because it breaks the most important feature Microsoft has always provided: backwards compatibility. Old and current sites which Windows Phone users will want to visit will break in this browser. Gmail works in html-only mode. Surely, some will adapt. But not all.

And by the way, Apple got it right on iPhone.

P.S. Todd Brix said there will be a Windows Update-like service in Windows Phone 7, so let’s hope that IEMobile will get updated.


critical IIS vulnerability

Posted in security, web-development by sharovatov on 29 December 2009

Just got a link from our system administrator: http://securityvulns.ru/Wdocument993.html

Go read the vulnerability description now!

Basically – if your users upload files to your site and THEY specify file names, you’re vulnerable:

#Vulnerability/Risk Description:
– IIS can execute any extension as an Active Server Page or any other executable extension. For instance, “malicious.asp;.jpg” is executed as an ASP file on the server. Many file uploaders protect the system by checking only the last section of the filename as its extension. And by using this vulnerability, an attacker can bypass this protection and upload a dangerous executable file on the server.

There’s an unverified patch for this vulnerability, but once again this shows that you just can’t let any user input be saved to your system without filtering.

So if you allow file uploads, your script has to generate the filenames, never your users.
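
Here’s a minimal sketch of the idea in JavaScript (classic ASP can run server-side JScript, but the same applies to any language; the extension whitelist and naming scheme are just examples):

    // sketch: ignore the user-supplied name entirely, whitelist the extension
    function safeFileName(userFileName) {
      // take only the very last extension...
      var match = /\.([a-z0-9]+)$/i.exec(userFileName);
      var ext = match ? match[1].toLowerCase() : '';
      // ...and allow it only if it's in the whitelist
      var allowed = { jpg: 1, jpeg: 1, gif: 1, png: 1 };
      if (!allowed[ext]) ext = 'bin';
      // the base name is generated by us, so "malicious.asp;.jpg" can't survive
      return new Date().getTime() + '-' +
             Math.floor(Math.random() * 1000000) + '.' + ext;
    }

    // safeFileName('malicious.asp;.jpg') -> e.g. "1261234567890-42.jpg"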

Share :

Pomodoro Windows 7 gadget

Posted in widgets, windows 7 by sharovatov on 3 December 2009

I was really inspired by the http://tomatoi.st/ web service, which provides an easy-to-use web interface for the Pomodoro time management technique.

But unfortunately, tomatoi.st is down due to overload far too often, so I spent 20 minutes and prepared a simple Windows 7 pomodoro gadget. It does just what’s needed: it shows timers.

Click the “Work” button to start a 25-minute work interval, “short br” to get a 5-minute short break timeout, and “long br” for a 15-minute long break.

It’s dead easy to download and install – just click here. Or you can inspect the code if you want to – the gadget is just a zip file with html, css and js inside.

Ah, and I have to warn you – when a period is over, it starts playing Alert.wav every second until you set a new period.
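
The whole gadget boils down to a countdown timer; here’s a minimal sketch of the idea (not the actual gadget code – the element id and durations are illustrative):

    // minimal countdown sketch of the gadget's logic
    var DURATIONS = { work: 25 * 60, shortBreak: 5 * 60, longBreak: 15 * 60 };
    var secondsLeft = 0;
    var timer = null;

    function start(kind) {
      secondsLeft = DURATIONS[kind];
      if (timer) clearInterval(timer);
      timer = setInterval(tick, 1000);
    }

    function tick() {
      if (secondsLeft > 0) {
        secondsLeft--;
        var m = Math.floor(secondsLeft / 60);
        var s = secondsLeft % 60;
        document.getElementById('display').innerHTML =
          m + ':' + (s < 10 ? '0' : '') + s;
      } else {
        // period is over: nag every second until a new period is started
        System.Sound.playSound('Alert.wav'); // Windows gadget platform API
      }
    }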

For more information about Windows 7 gadgets you can read the following posts on my blog:

  1. introduction to the gadgets platform
  2. Exploring Windows Desktop Gadgets 
  3. Exploring Windows Desktop Gadgets #2 – security and limitations
  4. Exploring Windows Desktop Gadgets #3 – settings storage
  5. Exploring Windows Desktop Gadgets #4 – flyouts

Or read MSDN.

P.S. this gadget doesn’t have any settings or flyout or anything else – it’s very simple.


Beanstalkapp, FogBugz and now Case Tracker

Posted in web-development by sharovatov on 27 November 2009

As a follow-up to my post about a free hosted integrated solution for bug tracking and version control, I’d like to introduce a great tool I accidentally found and then installed – Case Tracker.

It’s a free desktop application which lets you easily view the list of current bugs and gives you a way to “start working” on a bug, so the time you actually spend fixing the bug or implementing the feature is carefully calculated. So it’s basically a time-tracking application for FogBugz – it pulls the list of bugs straight from your FogBugz installation.

To start working, you need to enter your FogBugz username, password and the URL where you have it installed.

Case Tracker supports both FogBugz On Demand and locally hosted versions – it simulates all the required POST requests as if you were working with FogBugz through your browser. As soon as you’ve entered the correct username and password, it shows you the list of active bugs.

However, by default it shows bugs assigned to anybody, which may not always be desirable. To address this, Case Tracker provides a search filter (the funnel on the right-hand side of the pause button). For example, I need only those bugs that are assigned to me, so I add assignedto:"Vitaly Sharovatov" as a search filter.

Then I press “Go” and my list is populated with only the bugs I need. Awesome!

For more detailed instructions on the allowed syntax, read this.

What’s really great about Case Tracker is that it can automatically stop measuring time when you’re away from the keyboard for a certain period of time.

However, Case Tracker doesn’t allow creating new cases – it opens your FogBugz URL in your default browser so you can enter the new case there. But Case Tracker is not a replacement for the FogBugz UI – its goal is to simplify time tracking.

So the general flow is:

  1. you choose a bug from the drop-down list
  2. if no estimate has been set for the bug, Case Tracker prompts you to enter one (using the same syntax rules as FogBugz in the browser)
  3. the work is started and the time is measured – if you’re afk or just press the pause button, it stops measuring
  4. when you’re finished, you commit your changes and mark the bug resolved, either by specifying status:resolved in the svn commit comment or via Case Tracker itself – I prefer the commit comment, simply because I’m used to it

So if you use FogBugz, this tool is definitely worth trying!


Browsers’ developer tools evolution

Posted in browsers, javascript, web-development by sharovatov on 19 November 2009

It’s great to see that better tools for developers are starting to appear.

As in many other cases, the race started with IE5.01 and its support for script debugging in the external Script Debugger app. And now the race takes us to a new level, with awesome tools built into browsers (like Firebug in Fx or Devtools in IE8) or even better external ones – let’s welcome dynaTrace Ajax!

dynaTrace Ajax supports IE6, IE7 and IE8, and will soon support Firefox. It’s basically the best tool out there for profiling and debugging javascript and CSS. Here’s what John Resig, creator of the jQuery library, says about the tool:

I’m very impressed with dynaTrace AJAX Edition’s ability to get at the underlying “magic” that happens inside a browser: page rendering, DOM method execution, browser events, and page layout calculation. Much of this information is completely hidden from developers and I’ve never seen it so easily collected into a single tool. Huge kudos to dynaTrace for revealing this information and especially so for making it happen in Internet Explorer.

And here’s what Steve Souders, web performance guru, says:

When it comes to analyzing your JavaScript code to find what’s causing performance issues, dynaTrace Ajax Edition has the information to pinpoint the high-level area all the way down to the actual line of code that needs to be improved. I recommend you give it a test run and add it to your performance tool kit.

Must-have for any web-developer, seriously.

It’s interesting to see that Google and Apple are playing a good game of catch-up – both the Chromium and Apple Safari teams are investing significant resources in building devtools, and Chromium 4 finally has its own CPU & heap profilers on top of V8. So bearing in mind that Firefox profiling will be supported by dynaTrace Ajax, it’s only Opera that’s left behind at the moment.

Come on, Opera team!

P.S. and by the way, Opera, can we get inPrivate browsing mode please?


Deleting flash plugin (flash.ocx)

Posted in no category by sharovatov on 9 November 2009

Our great system administrator, amongst other sysadmin-specific posts, has published a really interesting post about deleting the flash plugin.

The problem is that the sneaky Flash installer, during installation, additionally puts a write-denial entry into the files’ ACLs for all users. This rule overrides all other permissions and prevents the files from being deleted on operating systems that respect NTFS access rights. So to delete the files, it’s enough to open the file’s properties, press the “Advanced” button on the “Security” tab, and remove the two entries describing the write Deny rule. After that, the files can be deleted without problems.

Here’s the essence:

When you try to delete the flash plugin (flash6.ocx, flash10c.ocx) from the %windir%\system32\Macromed\Flash folder, you get “permission denied” even if you’re the owner of the directory. The reason is that the Flash plugin installer sets DENY WRITE permissions in the NTFS ACL for these files, and DENY rules always override ALLOW rules. So when you try to delete the files, even as their owner, you’re denied :)

To fix this and delete the file, first run regsvr32 /u <path_to_file> to unregister the file (if it’s registered in the system). Then open the file’s properties, go to the “Security” tab, click the “Advanced” button and remove the two “Deny” entries there. After that you won’t have any problems deleting the file.
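
On Vista and Windows 7 the same steps can presumably be scripted from an elevated command prompt (a sketch, not tested; icacls is Vista+, and the /remove:d line assumes the Deny entries are for Everyone, as in Dmitry’s case):

    rem unregister the plugin (if it's registered)
    regsvr32 /u %windir%\system32\Macromed\Flash\flash10c.ocx
    rem strip the Deny entries from the ACL, then delete the file
    icacls %windir%\system32\Macromed\Flash\flash10c.ocx /remove:d Everyone
    del %windir%\system32\Macromed\Flash\flash10c.ocx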

Thanks for sharing this, Dmitry!


HTTP persistent connections, pipelining and chunked encoding

Posted in http by sharovatov on 5 November 2009

When I have free time, I like to reorganise the knowledge I’ve gained and prepare mindmaps/cheatsheets/manuals on interesting stuff. And the formal approach I usually take forces me to organise the data in a way that won’t take me long to grasp again if I forget something.

And I also like posting the resulting resources to the blog — it’s good practice for my English technical writing, plus some publicity for the knowledge ;)

So this post is another one from the HTTP series and describes HTTP/1.1 persistent connections and chunked encoding.

HTTP/1.0 said that for every request to a server you had to open a TCP/IP connection, write the request to the socket and get the data back.

But pages on the internet became more complex, and authors started including more and more resources on their pages (images, scripts, stylesheets, objects — everything browsers had to download from the server). Clients were opening a separate connection for every resource request, and since opening a new connection takes time and CPU/memory on both ends, the resulting latency was getting worse from the user’s perspective. Something had to be done to improve the situation.

So the HTTP IETF decided to introduce a nice technique called “persistent connections”.

Persistent connections reduce network latency and CPU/memory usage of all the peers by allowing reuse of the already established TCP/IP connection for multiple requests.

As I mentioned, an HTTP/1.0 client closed the connection after each request. HTTP/1.1 introduced the reuse of one TCP/IP connection for multiple sequential requests, and both server and client can indicate that the connection has to be closed upon completion of the current request-response by sending the Connection: close header.

Usually an HTTP/1.1 client sends the Connection: close header with the last request in the queue to indicate that it won’t need anything else from the server, so the TCP/IP connection can be safely closed after that request has been served with a response. (Say it wants to download 10 images for an HTML page: it sends Connection: close with the 10th image request, and the server sends the last image and closes the connection once it’s done.)
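
On the wire it’s just an extra header on that last request-response pair (the host and sizes here are illustrative):

    GET /images/10.png HTTP/1.1
    Host: example.org
    Connection: close

    HTTP/1.1 200 OK
    Content-Type: image/png
    Content-Length: 12345
    Connection: close

    ...12345 bytes of image data, after which the server closes the connection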

Persistent connections are the default for HTTP/1.1 clients and servers.

And even more interestingly, HTTP/1.1 introduced pipelining support – a concept where the client can send multiple requests without waiting for each response to come back, and the server then has to send the responses in the same order the requests came in.
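
On the wire, a pipelining client simply fires its requests back to back (host name illustrative); the server must answer with the response to /style.css first, then the one to /script.js:

    GET /style.css HTTP/1.1
    Host: example.org

    GET /script.js HTTP/1.1
    Host: example.org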

Note: pipelining is not supported in IE/Safari/Chrome and is disabled by default in Firefox, which leaves Opera the only browser that supports it and has it enabled. I will cover this topic in one of the next posts.

In any case, if the connection is dropped, the client will initiate a new TCP/IP connection, and the requests that didn’t get a response back will be resubmitted through the new connection.

But as one connection is used to send multiple requests and receive responses, how does the client know when it has finished reading the first response?

Obviously, the Content-Length header must be set on each response.

But what happens when the data is dynamic and the response’s total content length can’t be determined by the time transmission starts?

In HTTP/1.0 everything’s easy — the Content-Length header can just be left out: the transmission starts, the client reads the data coming over the connection, and when the server has finished sending the data, it simply closes the TCP/IP connection, so the client can’t read from the socket any more and considers the transmission completed.

However, as I’ve said, in HTTP/1.1 each transaction has to have a correct Content-Length header, because the client needs to know when each transmission is completed, so that it can either start waiting for the next response (if the requests were pipelined), stop reading the current response from the socket and send a new request through the same TCP/IP connection (in normal sequential mode), or close the connection (if it was the last response it was expecting).

So as the connection is reused to transmit the content of multiple resources, the client needs to know exactly when each resource download is completed, i.e. it needs the exact number of bytes it has to read from the connection socket.

And it’s obvious that if the Content-Length cannot be determined before the transmission starts, the whole persistent connections concept is useless.

That is why HTTP/1.1 introduced the chunked encoding concept.

The concept is quite simple — if the exact Content-Length for the resource is unknown when the transmission starts, the server may send the resource content piece by piece (in so-called chunks), prefixing each chunk with its size, and then send an empty chunk of zero size at the end of the whole response to notify the client that the transmission is complete.

To let HTTP/1.1-conforming clients know that a chunked response is coming, the server sends a special header — Transfer-Encoding: chunked.

The chunked encoding approach allows the client to read the data safely — it knows the exact number of bytes to read for each chunk, and it knows that once the empty chunk arrives, the resource transmission is completed.
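
Here’s what a chunked response looks like on the wire (the chunk sizes are hexadecimal: 0x1a = 26 bytes, 0x10 = 16 bytes):

    HTTP/1.1 200 OK
    Content-Type: text/plain
    Transfer-Encoding: chunked

    1a
    abcdefghijklmnopqrstuvwxyz
    10
    1234567890abcdef
    0

The client reads 26 bytes, then 16 bytes, then sees the zero-size chunk and knows the response is complete, all without an overall Content-Length header.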

It’s a little bit more complex than the HTTP/1.0 scenario, where the server just closes the connection as soon as it’s finished, but truly worth it — persistent connections save server resources and reduce overall network latency, improving the user experience.


PHP loadHTMLFile and a html file without DOCTYPE

Posted in php, web-development by sharovatov on 1 November 2009

Just noticed that when you parse an html file with DOMDocument’s loadHTMLFile method and there’s no DOCTYPE defined in the html, PHP will silently load an empty DOM document.

Just try saving the following in a test.html file:

<html><body><div id="toc">wtf</div></body></html>

And then run the following php code:

$doc = new DOMDocument();
if ($doc->loadHTMLFile('test.html')) {
  // we get here: loadHTMLFile reports success even without a DOCTYPE...
  echo 'loadHTMLFile was successfully executed<br>';
  // ...but getElementById can't find the node
  $toc = $doc->getElementById('toc');
  echo 'now trying to var_dump the $toc:<br>';
  var_dump($toc);
}

You’ll get NULL as the result of the var_dump call, as if getElementById couldn’t find the node.

Interesting?

Citing php.net,

The function parses the HTML document in the file named filename. Unlike loading XML, HTML does not have to be well-formed to load.

Does this imply that the DOCTYPE may be omitted? I think so. But then the abovementioned code wouldn’t show NULL as a dump of $toc. Unfortunately, the experiment shows that a DOCTYPE is required; even an HTML5-ish <!DOCTYPE html> will do the job.

But why on earth doesn’t loadHTMLFile throw a warning, or at least return false as it should according to the documentation? Nobody knows.

So if you notice that your DOM-based php script acts in a weird way, check that you have a DOCTYPE defined in the HTML document you’re trying to parse.

Hope this saves someone some time.

P.S. More bugs to come — if you have an HTML file saved in UTF-8 with a BOM, loadHTMLFile will throw the following E_WARNING:

Warning: DOMDocument::loadHTMLFile() [function.DOMDocument-loadHTMLFile]: Misplaced DOCTYPE declaration in test-BOM.html, line: 1 in /home/test/www/test-DOMDocument.php on line 3

Remove the BOM and everything works fine. Apparently, loadHTMLFile doesn’t know that a BOM usually indicates that the document is saved in UTF-8/16/32. Weird.

P.P.S. Another issue. Try pointing loadHTMLFile at an HTML document saved in UTF-8 with some international characters (Russian words, in my case). Then get a node with international characters and do echo $node->nodeValue. Are you getting corrupted symbols? I was. The whole project is in UTF-8; every single file is saved in UTF-8.

I added <meta http-equiv="Content-type" content="text/html;charset=utf-8" /> to the head section — the characters started showing in the correct encoding, but the following WARNING appeared:

Warning: DOMDocument::loadHTMLFile() [function.DOMDocument-loadHTMLFile]: Input is not proper UTF-8, indicate encoding ! in /home/test/www/test-russian.html, line: 65 in /home/test/www/test-DOMDocument.php on line 29

And the only way to properly get rid of this warning is to add

<?xml version="1.0" encoding="UTF-8"?>

as the first line of your html document. After that, everything finally worked without any warnings or issues. Awesome: an XML declaration has to be used for loadHTMLFile to run properly. Way too buggy to use.


Twitter is now an officially accepted SEO tool

Posted in SEO by sharovatov on 22 October 2009

Now that Microsoft has added live Twitter search results to the Bing search results page and Google has promptly followed, Twitter is becoming a very useful SEO tool which can bring additional traffic to your website. The main thing here is that the data is live: as far as I understand, the search index is updated by Twitter directly, so the moment you tweet something, others will see it in the search results. Awesome!

Personally, I’m very afraid that Twitter will be seriously bloated with spam. And I don’t know yet how Microsoft and Google are going to filter all that spam out. Or maybe that’s something Twitter will do? We’ll see.

Either way, Twitter will gain even more popularity and influence.


HTTPBis group is awesome!

Posted in Firefox, http, IE8, web-development by sharovatov on 21 October 2009

I’m finally back to blogging. I’ve finally started finding time between doing stuff at home, working at my great place of work and studying English :)

As you know, the HTTP/1.1 spec said that conforming clients SHOULD NOT open more than 2 concurrent connections to one host. This was defined back in 1997, and at that time it seemed reasonable for a client to have 2 simultaneous connections; given that HTTP/1.1 introduced the persistent connections concept, people thought that 2 simultaneously opened reusable TCP/IP connections would be enough for general use.

However, everything changes. Broadband internet came to the mass market, and people started thinking that better download parallelism could benefit the performance of a whole website or webapp. The history started with IE5.01, which opened two connections by default but offered a way to configure the number. So if you had a really good internet connection, you could make websites load significantly faster.

By the time IE8 development started, broadband connections had become the standard for home internet, so IE8 started opening 6 connections (if the bandwidth allows – on dialup or behind a proxy it will still open 2). So the IE8 engineers made a smart move and presented the world with a browser that seemed to load sites faster.

Needless to say, Firefox 3 decided to change the value as well, so Firefox 3 now has 6 as the default value for the network.http.max-persistent-connections-per-server setting. Good for Mozilla for copying stuff from IE again!

And now the HTTPBis team (Julian Reschke) has committed a change stating that in the forthcoming HTTP standard the maximum number of concurrent connections is not limited even with a “SHOULD NOT” clause :)

Thanks HTTPBis team!