Sharovatov’s Weblog

Beanstalkapp, FogBugz and now Case Tracker

Posted in web-development by sharovatov on 27 November 2009

As a follow-up to my post about free hosted integrated solution for bugtracking and version-control, I’d like to introduce a great tool I accidentally found and then installed – Case Tracker.

It’s a free desktop application that lets you easily view the list of current bugs and “start working” on a bug, so the time you actually spend fixing the bug or implementing the feature is carefully tracked. It’s basically a time-tracking companion for FogBugz – it pulls the list of bugs straight from your FogBugz installation:

[screenshot: Case Tracker showing the list of bugs]

To start working, you need to enter your FogBugz username, password, and the URL where you have it installed:

[screenshot: Case Tracker login settings]

Case Tracker supports both FogBugz On Demand and self-hosted installations – it simulates all the required POST requests as if you were working with FogBugz through your browser. As soon as you’ve entered a correct username and password, it will show you the list of active bugs:

[screenshot: the list of active bugs]

However, by default it shows bugs assigned to anybody, which is not always desirable. To address this, Case Tracker provides a search filter (the funnel icon to the right of the pause button). For example, I only need the bugs that are assigned to me, so I add assignedto:"Vitaly Sharovatov" as a search filter:

[screenshot: the search filter]

Then I press “Go” and my list gets populated with only the bugs I need! Awesome!

For more detailed instructions on the allowed syntax read this.

What’s really great about Case Tracker is that it can automatically stop measuring time when you’re away from the keyboard for a certain period of time:

[screenshot: away-from-keyboard settings]

However, Case Tracker doesn’t allow creating new cases – it opens your FogBugz URL in your default browser so you can enter the new case there. But Case Tracker is not a replacement for the FogBugz UI – its goal is to simplify time tracking.

So the general flow is:

  1. you choose a bug from a drop-down list
  2. if no estimate has been set for this bug, Case Tracker prompts you to enter one (using the same syntax rules as FogBugz in the browser)
  3. work is started and time is measured – if you’re AFK or just press the pause button, it stops counting
  4. when you’re finished, you commit your changes and mark the bug resolved, either by specifying status:resolved in the svn commit comment or through Case Tracker itself – I prefer the commit comment, just got used to it (the sketch below shows how the same flow maps onto the FogBugz API)
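Case Tracker itself simulates the browser’s POST requests, but FogBugz also exposes an XML API (api.asp), so a similar flow can be scripted. Here’s a minimal PHP sketch – the URL is hypothetical, and the cmd=logon/search/startWork/stopWork/resolve commands and parameter names are my assumptions based on the FogBugz API docs, so verify them against your installation:

<?php
// A minimal sketch of the Case Tracker flow against the FogBugz XML API.
// All command and parameter names here are assumptions — check them
// against your FogBugz installation's API documentation.
$base = 'https://example.fogbugz.com/api.asp'; // hypothetical URL

function fbApi($base, array $params) {
    // every FogBugz API call is a plain HTTP request returning XML
    return simplexml_load_string(
        file_get_contents($base . '?' . http_build_query($params)));
}

// 1. log on and get a session token
$logon = fbApi($base, array('cmd' => 'logon',
    'email' => 'you@example.com', 'password' => 'secret'));
$token = (string)$logon->token;

// 2. fetch the cases assigned to me
$cases = fbApi($base, array('cmd' => 'search', 'token' => $token,
    'q' => 'assignedto:"Vitaly Sharovatov"', 'cols' => 'ixBug,sTitle'));

// 3. start the clock on the first case in the list
$ixBug = (string)$cases->cases->{'case'}[0]['ixBug'];
fbApi($base, array('cmd' => 'startWork', 'token' => $token, 'ixBug' => $ixBug));

// ...the bug gets fixed here...

// 4. stop the clock and resolve the case
fbApi($base, array('cmd' => 'stopWork', 'token' => $token));
fbApi($base, array('cmd' => 'resolve', 'token' => $token, 'ixBug' => $ixBug));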

So if you use FogBugz – this tool is definitely worth trying!



Browsers’ developer tools evolution

Posted in browsers, javascript, web-development by sharovatov on 19 November 2009

It’s great to see that better tools for developers are starting to appear.

As in many other cases, the race was started by IE5.01 with its support for script debugging via the external Script Debugger app. And now the race takes us to a new level with awesome tools built into browsers (like Firebug in Firefox or the Devtools in IE8) or even better external ones – let’s welcome dynaTrace Ajax!

dynaTrace Ajax supports IE6, IE7 and IE8, and will soon support Firefox. It’s basically the best tool out there for profiling and debugging JavaScript and CSS. Here’s what John Resig, creator of the jQuery library, says about the tool:

I’m very impressed with dynaTrace AJAX Edition’s ability to get at the underlying “magic” that happens inside a browser: page rendering, DOM method execution, browser events, and page layout calculation. Much of this information is completely hidden from developers and I’ve never seen it so easily collected into a single tool. Huge kudos to dynaTrace for revealing this information and especially so for making it happen in Internet Explorer.

And here’s what Steve Souders, web performance guru, says:

When it comes to analyzing your JavaScript code to find what’s causing performance issues, dynaTrace Ajax Edition has the information to pinpoint the high-level area all the way down to the actual line of code that needs to be improved. I recommend you give it a test run and add it to your performance tool kit.

A must-have for any web developer, seriously.

It’s interesting to see that Google and Apple are playing good catch-up – both the Chromium and Apple Safari teams are investing significant resources in building devtools, and Chromium 4 finally has its own CPU & heap profilers on top of V8. So bearing in mind that Firefox profiling will soon be supported by dynaTrace Ajax, Opera is the only one left out of the game at the moment.

Come on, Opera team!

P.S. And by the way, Opera, can we get an InPrivate browsing mode, please?



Deleting flash plugin (flash.ocx)

Posted in no category by sharovatov on 9 November 2009

Our great system administrator, amongst other sysadmin-specific posts, has published a really interesting one about deleting the Flash plugin:

The problem is that the sneaky Flash installer, during installation, additionally writes DENY WRITE entries into these files’ ACLs for all users. This rule overrides all other permissions and prevents the files from being deleted on operating systems that respect NTFS access rights. So to delete them, it’s enough to open the file’s properties, press the “Advanced” button on the “Security” tab, and remove the two entries describing the write denial (Deny). After that the files can be deleted without problems.

In short:

When you try to delete the Flash plugin (flash6.ocx, flash10c.ocx) from the %windir%\system32\Macromed\Flash folder, you get “permission denied” even if you’re the owner of the directory. The reason is that the Flash plugin installer sets DENY WRITE permissions in the files’ NTFS ACLs, and DENY rules always override ALLOW rules. So even when you’re the owner of the files, you’re denied deletion :)

To fix this and delete the file, first run the regsvr32 /u <path_to_file> command to unregister the file (if it’s registered in the system). Then open the file’s properties, go to the “Security” tab, click the “Advanced” button and remove the two “Deny” entries there. After that you won’t have any problems deleting the file.
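If you prefer the command line, the same cleanup can be scripted. Here’s a sketch for Vista/Windows 7 – the flash10c.ocx file name and the Everyone account are my assumptions, so adjust them to whatever your Deny entries actually name:

cd /d %windir%\system32\Macromed\Flash
rem unregister the plugin first (file name depends on your Flash version)
regsvr32 /u flash10c.ocx
rem drop the Deny entries from the ACL (icacls ships with Vista and later)
icacls flash10c.ocx /remove:d Everyone
del flash10c.ocx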

Thanks for sharing this, Dmitry!



HTTP persistent connections, pipelining and chunked encoding

Posted in http by sharovatov on 5 November 2009

When I have free time, I like to reorganise the knowledge I’ve got and prepare mindmaps/cheatsheets/manuals on interesting stuff. And the formal approach I usually take forces me to organise the data in such a way that it won’t take me long to grasp the idea again if I forget something.

And I also like posting the resulting resources to the blog — it’s good practice for my English technical writing, plus some publicity for the knowledge ;)

So this post is another one from the HTTP series and describes HTTP/1.1 persistent connections and chunked encoding.

HTTP/1.0 said that for every request to a server you had to open a new TCP/IP connection, write the request to the socket and get the data back.

But pages on the internet became more complex, and authors started including more and more resources in their pages (images, scripts, stylesheets, objects — everything that browsers had to download from the server). Clients were opening a separate connection for every resource request, and since opening a new connection takes time and CPU/memory resources, from the user’s perspective the resulting latency was getting worse. Something had to be done to improve the situation.

So the IETF HTTP working group came up with a nice technique called “persistent connections”.

Persistent connections reduce network latency and CPU/memory usage of all the peers by allowing reuse of the already established TCP/IP connection for multiple requests.

As I mentioned, an HTTP/1.0 client closed the connection after each request. HTTP/1.1 introduced the use of one TCP/IP connection for multiple sequential requests, and both server and client can indicate that the connection has to be closed upon completion of the current request-response by sending the Connection: close header.

Usually an HTTP/1.1 client sends the Connection: close header with the last request in the queue to indicate that it won’t need anything else from the server, so that the TCP/IP connection can be safely closed after that request has been served with a response. (Say the client needs to download 10 images for an HTML page: it sends Connection: close with the 10th image request, and the server sends the last image and then closes the connection.)
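For illustration, a hypothetical exchange of two images over one persistent connection would look roughly like this on the wire (headers trimmed, byte counts made up):

GET /images/logo.png HTTP/1.1
Host: example.com

HTTP/1.1 200 OK
Content-Type: image/png
Content-Length: 3051

...3051 bytes of image data...

GET /images/footer.png HTTP/1.1
Host: example.com
Connection: close

HTTP/1.1 200 OK
Content-Type: image/png
Content-Length: 1270
Connection: close

...1270 bytes of image data, then the server closes the connection...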

Persistent connections are the default for HTTP/1.1 clients and servers.

And even more interestingly, HTTP/1.1 introduced pipelining support — a concept whereby the client can send multiple requests without waiting for each response to come back, and the server then has to send the responses in the same order the requests came in.

Note: pipelining is not supported in IE/Safari/Chrome and is disabled by default in Firefox, which leaves Opera as the only browser that both supports it and has it enabled. I will cover this topic in one of the next posts.

In any case, if the connection is dropped, the client will initiate a new TCP/IP connection, and the requests that didn’t get a response back will be resubmitted through the new connection.

But as one connection is used to send multiple requests and receive responses, how does the client know when it has finished reading the first response?

Obviously, the Content-Length header must be set for each response.

But what happens when the data is dynamic or the whole response’s content length can’t be determined by the time transmission starts?

In HTTP/1.0 everything’s easy — the Content-Length header can just be left out: the transmission starts, the client reads the data it’s getting from the connection, and when the server finishes sending, it simply closes the TCP/IP connection, so the client can’t read from the socket any more and considers the transmission completed.

However, as I’ve said, in HTTP/1.1 each transaction has to have a correct Content-Length header, because the client needs to know when each transmission is completed — so that it can either start waiting for the next response (if requests were pipelined), or stop reading the current response from the socket and send a new request through the same TCP/IP connection (in normal sequential mode), or close the connection if that was the last response it was to receive.

So as the connection is reused to transfer the content of multiple resources, the client needs to know exactly when each resource download is completed, i.e. it needs the exact number of bytes it has to read from the connection socket.

And it’s obvious that if Content-Length can not be determined before the transmission starts, the whole persistent connections concept is useless.

That is why HTTP/1.1 introduced the chunked encoding concept.

The concept is quite simple — if the exact Content-Length for the resource is unknown when the transmission starts, the server may send the resource content piece by piece (in so-called chunks), prefixing each chunk with its length, plus it sends an empty zero-length chunk at the end of the whole response to notify the client that the response transmission is complete.

To let HTTP/1.1-conforming clients know that a chunked response is coming, the server sends a special header — Transfer-Encoding: chunked.

The chunked encoding approach allows the client to safely read the data — it knows the exact number of bytes to read for each chunk, and it knows that once the empty chunk arrives, this resource’s transmission is completed.
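Here’s a tiny illustrative chunked response (CRLF line endings are shown literally as \r\n; each chunk is prefixed with its size in hexadecimal, and the zero-sized chunk terminates the response):

HTTP/1.1 200 OK
Content-Type: text/plain
Transfer-Encoding: chunked

7\r\n
Hello, \r\n
6\r\n
world!\r\n
0\r\n
\r\n

The client glues the chunks together and ends up with the 13-byte body “Hello, world!” without ever having seen a Content-Length header.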

It’s a little bit more complex than the HTTP/1.0 scenario where the server just closes the connection as soon as it’s finished, but it’s truly worth it — persistent connections save server resources and reduce overall network latency, therefore improving the user experience.


PHP loadHTMLFile and an HTML file without a DOCTYPE

Posted in php, web-development by sharovatov on 1 November 2009

Just noticed that when you parse an HTML file with DOMDocument’s loadHTMLFile method and there’s no DOCTYPE defined in the HTML, PHP will silently load an empty DOM document.

Just try saving the following in a test.html file:

<html><body><div id="toc">wtf</div></body></html>

And then run the following PHP code:

<?php
$doc = new DOMDocument();
if ($doc->loadHTMLFile('test.html')) {
  echo 'loadHTMLFile was successfully executed<br>';
  // getElementById should find the div from test.html
  $toc = $doc->getElementById('toc');
  echo 'now trying to var_dump the $toc:<br>';
  var_dump($toc); // prints NULL when no DOCTYPE is present
}

You’ll get NULL as the result of the var_dump call, as if getElementById couldn’t find the node.

Interesting?

Citing php.net,

The function parses the HTML document in the file named filename. Unlike loading XML, HTML does not have to be well-formed to load.

Does this imply that the DOCTYPE may be omitted? I’d think so. But then the abovementioned code wouldn’t show NULL as the dump of $toc. Unfortunately, experiment shows that a DOCTYPE is required — even an HTML5-ish <!DOCTYPE html> will do the job.

But why on earth doesn’t loadHTMLFile throw a warning or at least return false as it should according to the documentation? Nobody knows.

So if you notice that your DOM-based php script acts in a weird way, check if you have a DOCTYPE defined on the HTML document you’re trying to parse.
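If you can’t guarantee that every input document has a DOCTYPE, one possible workaround (a sketch, not an officially documented recommendation) is to read the file yourself, prepend a DOCTYPE when it’s missing, and parse the string with loadHTML:

<?php
// Sketch: make getElementById work by ensuring a DOCTYPE is present.
$html = file_get_contents('test.html');
if (stripos($html, '<!DOCTYPE') === false) {
    $html = "<!DOCTYPE html>\n" . $html;
}
$doc = new DOMDocument();
$doc->loadHTML($html); // loadHTML parses a string instead of a file
var_dump($doc->getElementById('toc')); // a DOMElement instead of NULL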

Hope this saves someone some time.

P.S. More bugs to come — if you have an HTML file saved in the UTF-8 codepage with a BOM, loadHTMLFile will throw the following E_WARNING:

Warning: DOMDocument::loadHTMLFile() [function.DOMDocument-loadHTMLFile]: Misplaced DOCTYPE declaration in test-BOM.html, line: 1 in /home/test/www/test-DOMDocument.php on line 3

Remove the BOM and everything works fine. Apparently, loadHTMLFile doesn’t know that a BOM usually indicates that the document is saved in UTF-8/16/32. Weird.
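If you load the markup as a string anyway, the BOM is easy to strip before parsing — a minimal sketch:

<?php
// Sketch: drop a leading UTF-8 BOM (0xEF 0xBB 0xBF) before parsing.
$html = file_get_contents('test-BOM.html');
if (substr($html, 0, 3) === "\xEF\xBB\xBF") {
    $html = substr($html, 3);
}
$doc = new DOMDocument();
$doc->loadHTML($html); // no "Misplaced DOCTYPE declaration" warning now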

P.P.S. Another issue. Try pointing loadHTMLFile at an HTML document saved in UTF-8 with some international characters (Russian words, in my case). Then get a node with international characters and do echo $node->nodeValue. Are you getting corrupted symbols? I was. The whole project is in UTF-8, and every single file is saved in UTF-8.

I added <meta http-equiv="Content-type" content="text/html;charset=utf-8" /> to the head section — the characters started showing in the correct encoding, but the following warning appeared:

Warning: DOMDocument::loadHTMLFile() [function.DOMDocument-loadHTMLFile]: Input is not proper UTF-8, indicate encoding ! in /home/test/www/test-russian.html, line: 65 in /home/test/www/test-DOMDocument.php on line 29

And the only way I found to properly get rid of this warning is to add

<?xml version="1.0" encoding="UTF-8"?>

to the first line of your HTML document. After that it finally worked without any warnings or issues. Awesome: an XML declaration has to be used just to make loadHTMLFile run properly. Way too buggy to use.
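If prepending an XML declaration to HTML files feels too dirty, another commonly suggested workaround (assuming the mbstring extension is available) is to convert all non-ASCII characters to HTML entities before parsing:

<?php
// Sketch: sidestep DOMDocument's encoding guessing by feeding it
// pure ASCII with the international characters as HTML entities.
$html = file_get_contents('test-russian.html');
$html = mb_convert_encoding($html, 'HTML-ENTITIES', 'UTF-8');
$doc = new DOMDocument();
$doc->loadHTML($html); // nodeValue now comes out correctly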

