The main benefit is avoiding unnecessary image decoding in the browser.
To paint an image onto the screen, a browser needs to:
- Retrieve the encoded contents of the image file
- Decode the image from its original JPEG, GIF, PNG, or WebP format into a bitmap in memory
- Paint the bitmap onto the screen
Performance issues arise when users scroll and resize, and decoding is the particularly expensive step. As we scroll up and down the page, the browser reclaims the memory previously occupied by off-screen images (that is, content outside the current scroll region). This means that whenever an image reappears at the edge of the screen, the browser has to go through the same expensive decoding process all over again. With a lot of images spread across a long page, the browser is likely to stutter on scroll.
What’s different about canvas: the browser doesn’t recycle the decoded bitmap inside a canvas. So by using canvas to render the image, we force the browser to keep the decoded data in memory, avoiding that unnecessarily heavy work.
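As a minimal sketch of the technique (the helper name and dimensions are hypothetical, not from any particular library), the idea is to draw the image into a canvas once it has loaded, so the decoded bitmap lives in the canvas backing store instead of being re-decoded on every reappearance:

```javascript
// Sketch: render an image into a canvas once, so the decoded bitmap
// stays resident with the canvas instead of being re-decoded every
// time the image scrolls back into view.
// (createCanvasImage is a made-up name for illustration.)
function createCanvasImage(src, width, height) {
  const canvas = document.createElement('canvas');
  canvas.width = width;
  canvas.height = height;

  const img = new Image();
  img.onload = () => {
    // drawImage triggers a decode; the result is kept by the canvas.
    canvas.getContext('2d').drawImage(img, 0, 0, width, height);
  };
  img.src = src;

  return canvas; // append this to the DOM in place of an <img> tag
}
```

You would then insert the returned canvas wherever the image tag used to go, keeping the layout otherwise unchanged.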
But of course, if I were targeting mobile devices, I’d switch back to image tags and let the browser do its job, since memory is scarce there.
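That switch could be as simple as the sketch below. The user-agent test is a naive heuristic I'm using purely for illustration, not production-grade device detection:

```javascript
// Naive illustration: prefer plain <img> on mobile (where memory is
// scarce), and the canvas technique on desktop.
function shouldUseCanvas(userAgent) {
  const isMobile = /Mobi|Android|iPhone|iPad/i.test(userAgent);
  return !isMobile; // on mobile, fall back to <img> and save memory
}
```

In practice a memory- or capability-based check would be more robust than sniffing the user agent, but the decision point is the same.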
I think this is a browser-specific tactic for dealing with the tension between image decoding and limited memory. I was talking about Chrome specifically, since the decode work is visible in the Timeline panel of its dev tools.
Take this project as an example:
We used image tags in previous versions. The catch with this site is that it involves a lot of parallax effects, so it’s especially important that nothing heavy is going on while the user scrolls.