Google I/O 2013 – Instant Mobile Websites: Techniques and Best Practices


MONA VAJOLAHI: I’m
Mona Vajolahi. I’m a product manager
at Google. I work on making the web fast,
and more specifically, I work on [INAUDIBLE] products. DOANTAM PHAN: My name’s
Doantam Phan. I’m also a product manager
working on making the web fast. BRYAN MCQUADE: And I’m
Bryan McQuade. I’m the technical lead of
PageSpeed Insights. MONA VAJOLAHI: All right. So I want to start by showing
you a video that would capture the impact of the
recommendations that we are going to talk to you
about today. So here’s a sample
Wikipedia page. We are going to load it on a
mobile device on a 3G network. And on the left hand side,
you’ll see the Wikipedia page– the original version. Then on the right hand side, the
optimized version, after we implemented the best practices
that we are going to share with you. So let me play the video. So as you can see, the original
one takes some time to show something on the page,
and overall the original finishes in five seconds,
whereas the optimized version finishes in two seconds. Now, let’s have a closer look
at what’s happening here. So even more interesting is
that the optimized version actually shows something on the
page around the one second mark, whereas in the original
version, we are staring at a blank screen for almost
three seconds. And here’s what we are
going to do today. We are going to try to get
some content on the page around that one second mark. Here’s a quick
agenda. So we’re going to talk about why speed
matters, and then we’re going to share the
recommendations for creating an instant mobile experience. And at the end, Bryan is
going to do a deep dive in one example, and show you in action
how we can do that. So we all know that connection
speed on mobile, 3G or 4G, is slower than your average
connection speed on desktop. However, users on a mobile
device actually expect sites to load as fast as, or even
faster than, what they have on the desktop. So there’s a problem. More than that, users of mobile
sites actually learn to avoid slow sites. And that first interaction that
they have with a site– the first experience they have–
is actually really, really important. In an experiment that added one
second of additional latency to a shopping site, page views decreased,
conversion rates dropped, and bounce rates went up. More importantly,
that experience sticks with the user, so they are less
likely to ever come back to the site. So here’s what we want
to do today. We want to make that first
experience really fast and snappy, so that we actually get
the users to come back and visit the site more often. So the topic of today’s talk–
we’re going to show how we can get the most important content
of the site to the user under one second. So that most important content
usually is what you see above the fold of the page. And why did we pick
one second? Because user studies show that
that is the limit that people are going to pay attention
to your site. So after that one second of
staring at a blank screen, users are more likely to just
step away and basically never come back to your site. And today, the average page load
time on mobile is seven seconds, which is a huge gap
from where we want to be compared to where
we are today. Now, we all know 4G bandwidth
is much higher than 3G, so that should solve all our
problems, and basically experience on mobile would
be very fast on 4G. Well, not quite. Let’s look at the difference
between bandwidth and latency. Bandwidth is the amount of
data transferred over the network per unit of time. So for example, a network can
have five megabit per second of bandwidth. However, latency is the delay
in transferring the packet from the source to
a destination. So for specifying latency, we
usually use round trip times. Now, if I have huge amounts of
data that I want to transfer on a network– like, if I’m
loading a video, obviously, bandwidth matters a lot. But when I’m loading a page,
there’s a lot of small requests going back and forth,
and in that case, bandwidth is not helping me a lot. So what happens is that latency
in loading your page is dominated by round
trip times. And now round trip times
on mobile networks are especially high. And on an average 3G network,
for example, you have a round trip between 100 milliseconds
to 450 milliseconds. On a 4G network, you
have between 60 milliseconds and 180. And as you can see, the
difference between 3G and 4G is not huge here. So in order to get that snappy
user experience, we have to design for this high latency
environment. And we have to try to reduce the
number of round trips as much as possible. And that brings us to the rules
that we are going to share with you today. These are about how to create
a fast user experience in a high latency environment. It could be when I’m using my
mobile phone to load a page. It could be when I’m using my
laptop and connecting to a 3G network to load a page. So the four rules are, one,
avoid landing page redirects. Two, minimize server
response time. Three, eliminate render
blocking resources. And four, prioritize
visible content. Now, what we are going to do
is that we’re going through each one of these rules, and
tell you why we picked those, and why they’re useful. So let’s just start by seeing
what happens when a user visits a site. So I’ll go to my mobile device–
mobile browser. I enter www.example.com
in the browser. What happens is that there’s
going to be a DNS lookup to fetch the IP. Then there’s going to be a TCP
connection established. Then the request is sent
to the server. Server takes the request,
processes the request, generates the response, and
sends it back to the user. So we already have three
round trips, plus server processing time. Now, if I say, on an average
network, the round trip time is 200 milliseconds, that brings
us to 600 milliseconds plus server response time. Now, let’s say www.example.com
actually has a redirect to m.example.com. What happens then? In that case, there is another
DNS lookup, another TCP connection, another send and
receive, which basically doubles our latency. So there are three additional
round trips. If we are over SSL, it’s
actually four, which brings us to 1.2 seconds of total latency. And this is all before
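That round-trip arithmetic can be written out as a back-of-the-envelope sketch (the 200 millisecond figure and the per-step round-trip costs are the ones from the talk; server processing time is deliberately left out):

```javascript
// Rough model of the latency just described. Assumption: each
// network step (DNS, TCP connect, request/response, SSL handshake)
// costs one full round trip.
const RTT_MS = 200; // average mobile round trip from the talk

// DNS lookup + TCP connect + request/response, plus one extra
// round trip for the SSL/TLS handshake when the site is on HTTPS.
function connectionLatency(ssl) {
  const roundTrips = 3 + (ssl ? 1 : 0);
  return roundTrips * RTT_MS;
}

// Going straight to www.example.com: 3 round trips = 600 ms.
const direct = connectionLatency(false);

// A redirect to m.example.com repeats the whole sequence for the
// new host, doubling the latency to 1.2 seconds.
const withRedirect = direct + connectionLatency(false);

console.log(direct, withRedirect); // 600 1200
```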
any of your HTML content gets to the browser. Now, if you look at this, there
are parts of this that I, as a developer, have
no control over. I cannot do anything about DNS
lookup, TCP connection, send and receive. But what I do have control
over is the redirect and server response time. And that brings us to the first
two rules: avoid any landing page redirects,
and minimize server processing time as much as
possible, to reduce that latency and minimize
the number of round trips. Now, Doantam is going to tell
you more about these next two rules, and why they are
really important. DOANTAM PHAN: Thanks, Mona. So we’ve seen that there are
various things that you can do to your network by adding extra
redirects, by having a high server response time, that
really slow down the user perceived latency
of your site. What I’m going to talk about now
is actually the way that the structure of your page– the HTML that you use and how
you organize it– can actually also lead to a huge increase
in user perceived latency. And the way that we’re going to
do this is we’re going to look a little bit into the
browser rendering pipeline. So what I have here is a
simplified diagram, and you can see that to paint anything
to the screen, we both need a document object model to be
ready, and the CSS object model to be ready. And it turns out that both of
these things are going to depend heavily on having the
presence of external scripts and stylesheets. To see how this actually
affects user perceived latency, we’ll go through
this brief example. So here I have a very simple
HTML web page. I’ve grayed out all the text,
because I want to indicate that the parser hasn’t actually gotten to the HTML yet. On the right hand side, I have
a representation of what the user sees– a smartphone– and at the bottom of the page,
I have some representation of the internal state of
the browser, right? So I have the pipeline, and I
also have the various external files that the HTML
references. So from our perspective, the
first interesting event is when we discover example.css. So at that point, you can see
the progress bars on the bottom are indicating that I’ve
started parsing the HTML, I’ve started constructing the
DOM, and I’ve initiated a fetch for the CSS. The next interesting event from
the perspective of user latency is when I encounter
this div– this div that presents some
text to the user. Now, ideally, at this point in
our rendering pipeline, we’d want to show this text to the
user immediately, right? Because the user has clicked
on our site, and they’re waiting, staring at a blank
page and a progress bar. But due to the way that this
div depends on the styling information inside external.css,
that’s going to cause the browser to not
know what to do. And so it’s just going to
continue parsing the file. And so this is where you can see
that the latency is going to creep in. So similarly, as I encounter the
image, as I encounter the JavaScript file, I’m also going
to have to initiate fetches for them. But I still can’t show anything,
because the CSS file hasn’t yet loaded. And so it’s only finally when
the CSS file is loaded off the network and into memory that I can
actually pop something up on the screen to the user. At that point, the user
is finally engaged. Up to that point, they’re just
staring at a blank screen. And so an important thing to
note here is that the DOM can be constructed iteratively
as you’re parsing through the file. But the CSS object model is only
constructed once all the CSS is in place. And this is really the reason
why a lot of webpages feel slow on a mobile device, or
really, on any device. It’s just not as noticeable on a
desktop, due to the way that latency works. And so as we continue parsing
through the file, we’re going to finish loading
the JavaScript. We’re going to finish
loading the image. And at that point, the page
is ready, and can sort of be consumed by the user. But at that point, they’ve been
waiting quite some time to get this information. So to summarize, the issue is
that these external scripts and stylesheets are going to
block the painting of content in the body, right? And we’re not saying that
external resources are bad. In fact, it’s generally a very
reasonable practice on desktop to have these resources for
cacheability, and for easy composition of HTML. But on a mobile device, if you
assume 200 millisecond latency, maybe 300 millisecond,
these extra round trips to fetch every additional
resource is going to be very costly. And so what you really want to
do is be able to avoid these blocking external scripts
and stylesheets. So generally speaking, when you
do this, the way that you can get around this is you can
be smart about the CSS. You can inline parts of the CSS
that are responsible for the above the fold content in
the header of your HTML file. And so, then when the browser
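Concretely, that inlining might look something like this (a sketch only; the class name and style rules are illustrative, not the talk's actual page):

```html
<!DOCTYPE html>
<html>
  <head>
    <!-- Critical, above-the-fold rules inlined in the head: no extra
         round trip is needed before the browser can paint the div. -->
    <style>
      .headline { font-size: 1.2em; color: #333; }
    </style>
  </head>
  <body>
    <div class="headline">Welcome!</div>
    <!-- Non-critical CSS and scripts are loaded later, so they
         cannot block this first paint. -->
  </body>
</html>
```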
is parsing through the file, at the point that it encounters
that div, it knows how to style it and paint
it to the screen. And that’s really where this
third rule comes from– this notion of eliminating render
blocking resources. Understanding that there’s
certain things in the rendering pipeline that will
block, because they don’t have the right information, and
making that information available to the browser
at the right moment. So let’s say that I attended
this talk, and I saw these three rules, and I think to
myself, hey, I’m done. I can just inline everything,
and everything will be great. And I just want to add an
additional caveat, which is where this fourth rule comes
into play, which is this notion of the additional latency
that comes due to the slow start of TCP. So in this example, I’ve
actually inlined all the CSS that I have. I’ve put all the styling
information there. In fact, I’ve gone the extra
step where I’ve added in the icons as data image URLs in
the header of the CSS. Now, the problem is going
to come up that– keep in mind that we want
to reduce these round trips, right? And so if the initial above the
fold content of your page is over 14K– over that initial
TCP congestion window– that’s actually going to incur an
additional round trip. And so, you need to be
really careful about how you inline something. You can’t just blindly inline a
file, unless, of course, the file is below that cutoff. But if a file’s above that
cutoff, you’re going to need to figure out what are the
critical parts of CSS, and what are the noncritical
parts of CSS? And then you should use delay
loading and asynchronous stuff for the parts that are not
necessary for that initial user experience to really get
to something in one second. And I want to emphasize that
we’re not saying that you should only make your whole page
fit in 14 kilobytes or 15 kilobytes, because that is kind
of really stringent, and actually fairly difficult
to do. You only need to make sure
that the above the fold portion of your page fits. And actually keep in mind that
with compression, that’s
actually going to give us a lot more space. Maybe up to 45K of text. And so that’s where this
final rule comes from– this notion of prioritizing
the visible content. So be smart about what you’re
inlining, and make sure that it fits within this congestion
window so that the user can get that content right away. So now Brian is going to go into
an example about how you apply these rules
to a real site. BRYAN MCQUADE: Thanks,
Doantam. So I’m Bryan McQuade. I’m the tech lead of
PageSpeed Insights. And I’m going to take what we
just learned from Mona and Doantam and apply that to an
actual website that we created to work through and see how much
faster we can make that website load on mobile. So we have this demo website, demo.modspdy.com, that we put together. It’s a simple mobile page. It sort of has characteristics
of standard mobile websites. It redirects to an m.site, it
has a little bit of server processing time, it’s reasonably
small, and it has just one stylesheet in the
head with some data URIs. The page looks like
this on the right. And it’s a simple page, right? We would expect a page like
this to load quickly. We would hope, anyway, right? It turns out– and I suppose I should
clarify. Modspdy.com was a domain we had available to create a demo. It has nothing to
do with SPDY. It just happened to
be on that domain. So now we’ll sort of dive in,
and we’ll look at this page, figure out where the performance
bottlenecks are, apply these optimizations one
at a time, and then observe the improvement in load time
that we get as result of applying those. So to start, we have this
unoptimized page here which literally just has
three resources. We’ve got demo.modspdy.com,
which redirects to m.modspdy.com, which
then loads a single static CSS resource. And what we see is that
unfortunately, the load of these resources is completely
serialized, and we essentially incur– because it’s on HTTPS
as well, we’ve got a fourth round trip in there– and the end result is that we’re
looking at about 6.6 seconds of latency before we
see anything on the screen, even for a simple
page like this. So let’s go ahead and look
a little further. So to start, we’ve got this
redirect, demo.modspdy.com, which redirects to
m.modspdy.com. And we know that’s costly. Mona showed us. In fact, in this particular
environment, we’re using WebPageTest’s 3G modeling. Round trips are actually
300 milliseconds. We’ve talked about 200 milliseconds being
a good general number, in between 3G and 4G. But that was for 200 milliseconds.
For this particular demo, we’re
using a 300 millisecond round trip time. And on top of that, because it’s
on HTTPS, we’re looking at four round trips. So that redirect ends up costing
us, by itself, 4 times 300, 1.2 seconds. So the question becomes,
how do we avoid that? How do we avoid that cost? There are really two good
ways to approach this. At a high level, we have to make
sure that we serve the user content at the URL they
request initially, right? If we redirect them from
demo.modspdy.com to m.modspdy.com, we saw we’re
inherently going to experience that additional latency. So what we have to do is
instead serve the right content to the right users
at the URL they request. So what that means is
one of two things. Either use responsive design,
which allows you to serve the same HTML to all your users,
be it mobile users, desktop users, and the page will render
differently depending on the device characteristics. And I should say, that’s a
great approach if you’re building a site from scratch. I think that’s the
right way to go. But if you’ve got an existing
mobile website and desktop website, and you’re just trying
to figure out, well, how do I move from having this
redirect to not, then what you want to do is make sure that you
vary the HTML content that you serve to your users based on
the user agent coming in at the web server. And so if you’re getting a
request from a mobile user agent, you serve the mobile HTML
directly, and if you get a request from a desktop user,
you serve the desktop HTML for them as well. So it’s easy enough to
sort of say, just go ahead and do that. Let’s go ahead and do
a quick example. I’m actually on the web server
for modspdy.com now. And we can take a look in
the demo directory. We’ve got a couple files. This is an Apache web server. So I’m going to go ahead and
actually bring up the .htaccess file for
demo.modspdy.com, and we’ll see that– so .htaccess file is an Apache
file that lets you specify additional information about how
content should be served. And what we’ve done here is we
have this rewrite rule that basically says, conditionally
apply the following rule if the HTTP user agent matches
either iPhone or Android. So basically, a very simple
mobile user agent matcher. You could expand on this. And then if that matched,
go ahead and rewrite the empty URL– so that is the URL with just a
slash, essentially– no URL, just the host name– to HTTPS://m.modspdy.com. So that’s the costly one that
we just looked at that incurred that 1.2 seconds
of latency. So if, instead, we tell Apache
to rewrite that URL to a local file, then what we’ll get– so let me do this, actually. Let me go ahead and put
demo.modspdy.com on there, and we can see it redirect. So that was before. And if we just switch those– so now I’ll go ahead
and do that again. And now we can see that the
content that we had been redirecting users to on m dot is
now served directly from demo.modspdy.com. And just to sort
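Reconstructed from that description (the demo's actual .htaccess may differ in its details), the change amounts to swapping the rewrite target:

```apache
RewriteEngine On

# Only apply the rule for mobile user agents -- a deliberately
# simple matcher, as in the demo. You could expand on this.
RewriteCond %{HTTP_USER_AGENT} "iPhone|Android"

# Before: redirect the bare URL to the m. site, which on this HTTPS
# setup costs four extra round trips (DNS, TCP, SSL, request/response).
#RewriteRule ^$ https://m.modspdy.com/ [R,L]

# After: internally rewrite to a local file instead, so the mobile
# HTML is served directly at the URL the user requested.
RewriteRule ^$ mobile.php [L]
```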
of close that– so it’s a pretty simple thing
to configure, right? This is the Apache variant, but
if you use a different web server, they all support
this in different ways. And then just to look– so we’re saying, basically,
serve up mobile.php instead. Why don’t we look at what
that file looks like? So that’s just a symlink over
to the m.modspdy.com index file, right? Which is exactly what we were redirecting the user to before. And so now we’re able to
avoid that redirect. And let’s see the effect
of doing that on the page load time. So if we think about what we
expect to happen here, we’re removing, as we talked about,
four round trips– DNS, TCP, SSL, and request and
response, each of which cost 300 millis from the
time to display. We were at 6.6 seconds before,
and as a result of removing that redirect, indeed, we see
the load time of the page drop to 5.4 seconds, which is
exactly what we expect. So we’ve sort of confirmed
through our test environment here the result that we
would expect to see. And by the way, did
I mention– so I don’t know, how many
people are familiar with WebPageTest? OK, so not most. So we’ve used WebPageTest to
both produce these videos and the waterfall underneath, which
shows the resources that are loaded and the time that
each one is loaded at. It’s a really great resource,
webpagetest.org. You can tell it, show me what
my page looks like over a 3G connection from various
locations around the world using different kinds of
devices, get these videos, look at still frames. It’s a really rich tool to
understand what the experience your users are seeing is. And so we’ve used that to
create these here. OK, so let’s go ahead and
dive into the next. So we’ve improved
the page, right? We’ve gone from 6.6 seconds to
5.4 seconds, but it’s still by no means fast. So let’s go ahead and dive
in and talk about server processing time. So there’s sort of two things
you want to think about when you think about server
processing time. One is, what is your server
processing time? How do you measure that? And two, if it’s high, why is
it high, and what can you do to reduce that? And so we can go ahead and take
a look at that actual page we have. I’ll bring up Chrome Dev Tools,
bring up the network tab, and then I’m going to
go ahead and reload. And so we can see here that
the waiting time, when we click on the resource for
demo.modspdy.com and the timing, the waiting time– I don’t know if you can read
that, it’s pretty small– is 1.6 seconds. So we’ve got 1.6 seconds of time
between the time we sent the request and the time
the first bytes of the response came back. So it’s quite high. We’d expect to see maybe one
network round trip there, but 1.6 seconds is way above
that, right? And so then the question
becomes, well, why? What’s going on there? Why is it so high? And so it’s a little bit outside
the scope of the talk to sort of figure out and
understand server processing time deeply, but at a high
level, what you need to do is essentially measure
this server side. So what are those times? And then, ideally, have some
monitoring infrastructure in place that helps you understand
if it’s high, where that time is going. And so one of the tools
we like for this is called New Relic. They have a free offering that
you can use, and it lets you see at a high level where time
is going within your application. So I’ll go ahead and just bring
up the New Relic view for this webpage. And so this graph in the middle
here is showing us over time how long the server took to
generate various responses that were requested for
the particular URL we’re looking at. And so we’ve got a breakdown by
database time and PHP time, and so we can see that, by and
large, recently, anyway– and so I should say these dips
are just places where there were no requests. It’s a demo page, so
there’s not a whole lot of activity here. But by and large, we’re seeing
pretty substantial time spent in the database querying, and
a little bit of time in that blue, though nonzero
time, right? A couple hundred mills in the
PHP execution environment. And so then the question
becomes why? What are we doing? And what can we do to sort
of address those? So I’ll just look really
quickly at our page. And this is just
a simple demo. But sure enough, we’ve
got two things. Get data from database, and then
render the HTML with that data, right? And I’ve sort of created the
queries in such a way that they’re intentionally slow for
the purpose of this demo, but this is something we see pretty
often: pages will have multiple-second
response times as a result of spending a lot of time either
in the database or executing PHP, or possibly some
other reason. But in any case, now that we
know that, the question becomes what can we do
to reduce this time? And our options are really
remove, defer, or optimize. And in this case, I observe
the page is generated dynamically, but it’s really
static content, right? And this is a common pattern
you see, too, right? You’ve got a page that’s mostly
static or it might change periodically, but
fundamentally, it doesn’t change on every request, but
it’s still generated dynamically from the database
on every request, and that ends up, in some cases, adding
a good bit of latency. So in this case, because it’s
mostly static, we can simply just whenever we update the
database to have a new product or whatever it may be, just
render that to HTML, right? Dump that to a static HTML file
on the web server, and then just serve that instead. And so I’ve done that. Or I thought I did that. I did do that. It’s over here. And we can see now, we’ll
go ahead and reload it and take a look– that our waiting time has gone
from that 1.6 seconds down to 84 milliseconds, because we’re
not invoking the database and running the PHP engine
on every request now. We just rendered this to static
content, and we’re serving that instead. And so, in general, as much as
you can precompute ideally all of the content, if your page
doesn’t have any dynamic pieces– many pages do have a
little bit that’s dynamic, but if you can precompute the
majority of it to minimize the work in the request path of
the user, you’ll create a better experience
for your users. So that’s server processing
time. And so now we can see–
that was about 1.5, 1.6 seconds we saw. And indeed, again, we were
at– what was it? 5.4 seconds? And now we’re down
to 3.8 seconds. So we’ve reduced render time,
again, by another 1.6 seconds. And you can see that if we go
backwards a little bit, this sort of green region right here
in the old version was quite long. That was that server
processing time. And we can see now that that
green region is much shorter. And that’s where we’ve pulled
in time, and that’s resulted in a faster render
on the screen. So let’s keep going. We’re doing well, but
we’re not to our one second target yet. And I should– I guess a spoiler alert. It’s physically impossible to
get to one second in this configuration. So we’re not going to get there,
but we’re going to get as close as we can get. We’re going to get
quite close. So let’s see how close
we can get. So as Doantam talked about, the
load of our external CSS resource blocks rendering
of page content. So that actually ends up
incurring seven round trips on the network, and at 300
milliseconds, that’s a very substantial cost. So that’s the first thing we can
do– a very simple, very easy thing– is just simply
experiment with inlining all that content. And people do this
a lot on mobile. It’s a pretty common
technique, right? Just inline, inline, inline. As Doantam talked about, there
are some drawbacks to that. So we’ll start with that. We’ll start with inlining, and
then we’ll iterate from there. So if this is our first page,
with the external stylesheet, then we can simply, as Doantam
showed, inline all the styles and serve it up. And so what does this actually
do in terms of load time performance? And so what we see is we’re down
from, I believe it was 3.6 seconds before, now
down to 2.4 seconds. So we’ve removed 1.2 seconds,
which, interestingly, is the four round trips from the DNS,
TCP, SSL and request response of that external stylesheet. We’ve essentially eliminated
those. We still have the round trips
for fetching the stylesheet, though, right? They’ve just moved
to be inline. And we’re still paying
that cost. And worse than that, we’ve
moved those assets from a cacheable resource–
a CSS file– into sort of a non-cacheable
HTML payload. So repeat visitors to our
site are going to end up downloading that content on
every visit, which is pretty undesirable, right? And so let’s see as a final sort
of optimization if we can go ahead and address
that issue and make the page faster. So the only issue we’re faced
with now is that we’ve got this large blob of CSS in the
head, and as Doantam talked about, that ends up delaying
render of the page due to the TCP congestion window growth. And so what we want to do is
essentially identify the critical CSS– that CSS that’s needed
to style and position content on the page– and load that inline
in the head. And ideally, that’s small. And then defer the
non-critical CSS. So let’s see what that
might look like. So if we take note of the fact
that the CSS is largely data URIs, right? And those data URIs are big. They also don’t compress very
well, so they end up taking a lot of time on the network. If we say, well, we’ll reserve
this space for those images in the HTML– we’ll carve out that 100 by
100 pixel block, and we’ll make sure to put that style in
early so things don’t move around when the stylesheet
loads, but then we’ll load the remaining content in a deferred
stylesheet in a way that doesn’t block render,
then we can make the page faster and recover a lot of
the caching benefits of externalizing that content
in the CSS resource. So let’s go ahead and take a
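One common way to do that deferred load is a small script along these lines (a sketch of the general technique; the demo's actual code wasn't shown, and the stylesheet URL is illustrative):

```javascript
// Append a <link> for the non-critical stylesheet after first
// paint, so fetching it never blocks the initial render.
function loadDeferredCSS(href, doc) {
  const link = doc.createElement('link');
  link.rel = 'stylesheet';
  link.href = href;
  doc.head.appendChild(link);
  return link;
}

// In the browser, kick it off once the page has loaded:
if (typeof window !== 'undefined') {
  window.addEventListener('load', function () {
    loadDeferredCSS('/deferred.css', document);
  });
}
```

Because the stylesheet arrives as a separate, cacheable resource, repeat visitors get it from cache instead of re-downloading it inline in the HTML.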
look at the effect of that. And so here we’ve achieved an
even more dramatic speedup of the first paint time. Now we’re at 1.5 seconds
to display almost all the content. You can see that the Chrome
icon comes in a bit later. And so it’s interesting to
look at the waterfall. For the first time, we’ve
actually moved the paint line from the end in, and then we can
see the first paint line here– this green line– actually happens, essentially,
before the deferred stylesheet has to load. So we’ve basically achieved a
render very shortly after the four round trips we incur for
the network cost, which is about as good as we can
do on this page. And then the remaining deferred
styles come in later, and they automatically kind of–
we’ll watch it again just to see it, right? The Chrome icon comes
in a slightly later time at the end. So one other thing I did just
as a kind of advanced optimization– if some of the icons on your
page are really high priority, you can inline low resolution
previews of those, and that’s what I’ve done with the
PageSpeed icon. And that causes that content to
show up a little sooner so the user can see it, but doesn’t
have the cost of downloading the full
image asset inline in a blocking manner. So just real quickly to close,
let’s go ahead and look at where we were and where
we ended up. So we went from a first paint
time of 6.6 seconds to about 1.5 seconds, which is just
about as low as we can get, right? That’s the sort of absolute
minimum– the four round trip times, or 1.2 seconds. So we’ve got a little bit of
browser processing time in there, and we’ve basically
streamlined this as much as possible. So that’s it. So at a high level, designing
for high latency means following these four
best practices. Avoiding landing page redirects,
minimizing server processing time, eliminating
render blocking round trips, and prioritizing visible
content. Any questions? AUDIENCE: So, several of the
tricks that you proposed here would cause the same URL to
serve both mobile and non-mobile content. I think that’s still
considered a no-no by search engines. Is that– BRYAN MCQUADE: No. So I’m not a search expert,
but there’s a good bit of content on the webmaster site
for Google specifically that talks about the different ways
you can address this issue. One of them is to have separate
URLs, but we actually show how you can support
both responsive design and varying the HTML. So both those techniques are
supported, at least by Google, and really should be by
all search engines. If they’re not, then– Does that– AUDIENCE: Yeah. I mean, in this specific
example, the site was very clearly mobile. Like, I wouldn’t want to
see that on a browser. BRYAN MCQUADE: Oh, I see. So what I didn’t actually
do there– there was a line in my .htaccess
that actually allowed it to conditionally
execute that redirect to the mobile thing, depending
on the user agent. So we would still send– I can actually really quickly
just– right. So if I actually enable this– I didn’t, because it made it
hard to actually use the demo on a desktop. But now if I re-enable that
rewrite [INAUDIBLE], if you now go to demo.modspdy.com,
well– I don’t have an index.html
now. AUDIENCE: Yeah. Because it’s a back end
redirect, though, if you go to that same URL on a mobile
device, you might get something different. So that’s why I kind of– I assume that this was not
intended to change the recommendation from Google. This was just an example. BRYAN MCQUADE: So yeah. So at a high level,
you should– and I apologize for this. I’m not actually sure
what’s going on. I can debug it in a moment. At a high level, it’s totally
fine to vary the HTML you send as a function of user agent. We say you should include the
Vary: User-Agent header in the response as well to give us a
heads up that it does vary as a function of user agent. AUDIENCE: Thanks. BRYAN MCQUADE: Yep. AUDIENCE: I have a question. This is kind of a throwback. Have you played around with
using progressive images? BRYAN MCQUADE: I think that’s
another talk in itself. So we’re looking at that now,
and we’re thinking about what is optimal there, but it’s
definitely a big challenge, I think, to do optimally
and efficiently. I’d be happy to– maybe we
can chat afterwards. AUDIENCE: OK. AUDIENCE: Hi. I’m a user of PageSpeed
Service, and I find it very cool. And I have a question about
how to reduce the latency of subresource requests, since we
know that in order for PageSpeed Service to optimize a
page, it will fetch the page first, and then make several
subresource requests. Could you share a bit about
your considerations around deploying processing centers
for PageSpeed Service in order to make it a great
global product? BRYAN MCQUADE: So I think Elia,
another person on our team, will be giving a talk
about PageSpeed products later today, and that might
be a better question to ask him. Or we could maybe
chat afterwards. AUDIENCE: OK. Thank you. BRYAN MCQUADE: Any
other questions? OK. Thank you. Oh, we’ll do one more. AUDIENCE: OK, so I noticed you
used some JavaScript magic there to make the CSS load
in a deferred manner. Are there any techniques that are
maybe coming to tell the browser in the style tag to
say, defer this for later? I don’t need to do that. BRYAN MCQUADE: I wish that
existed, and that’s something that, I think, is talked
about a little bit. I think it’s needed. Anytime you have to use a little
JavaScript snippet, it feels a little wrong, right? AUDIENCE: Well, it’s really
verbose, and you can’t really know what’s going
on unless you– BRYAN MCQUADE: Right, right. Yeah. So I think that’s where
we want to be. There’s no mechanism to express,
basically, I want to load this stylesheet, but don’t
have it block the render today, and so you
have to do it with that JavaScript mechanism currently. But I think that’s where
we should be moving. AUDIENCE: Thank you. BRYAN MCQUADE: Yep. AUDIENCE: So I had a question
about putting an upstream cache in front of these
web servers. And your recommendation is to
vary on the user agent, which means the cache is basically
made ineffective. BRYAN MCQUADE: So that’s a
good question, actually. I don’t know if there was a
video from the webmaster team recently about this. So we talked to some
of the big CDNs– someone at Akamai, for
instance– and they actually walked us through basically how
you would enable this use case using their system
specifically. I can point you at that if
that would be helpful. Basically, it is a solvable
problem that requires a little bit of additional configuration
on the CDN side. OK. Thank you.
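For reference, the “JavaScript magic” for loading CSS in a deferred manner that came up in the questions above can be sketched roughly like this. This is a minimal, hypothetical sketch: the function name and stylesheet path are illustrative, not from the talk.

```javascript
// Minimal sketch of deferring a stylesheet so it doesn't block
// first render: create the <link> from script instead of putting
// a blocking <link> tag in the <head>. Names are illustrative.
function loadDeferredStylesheet(doc, href) {
  var link = doc.createElement('link');
  link.rel = 'stylesheet';
  link.href = href;
  doc.head.appendChild(link);
  return link;
}

// In a page, you would typically call this once the initial render
// is no longer at stake, for example:
//   window.addEventListener('load', function () {
//     loadDeferredStylesheet(document, 'styles.css');
//   });
```

Because the link element is created from script after the initial render, the browser fetches the CSS without holding up first paint, which is exactly the behavior the questioner wished could be expressed declaratively in the markup.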

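The .htaccess rule Bryan mentions, which conditionally serves the mobile page depending on the user agent without a redirect round trip, might look roughly like the following. This is a hypothetical sketch using Apache mod_rewrite and mod_headers; the user-agent pattern and file names are illustrative.

```apache
# Hypothetical sketch of a user-agent-conditional rewrite.
RewriteEngine On

# Internally rewrite (no redirect round trip) mobile requests to
# the mobile version of the page, keeping the URL the same.
RewriteCond %{HTTP_USER_AGENT} "Android|iPhone|Mobile" [NC]
RewriteRule ^index\.html$ mobile/index.html [L]

# Advertise that the response varies by user agent, as recommended
# in the answer above.
Header append Vary User-Agent
```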
12 Replies to “Google I/O 2013 – Instant Mobile Websites: Techniques and Best Practices”

  1. I'm going to say this while I can. G+/chat/hangouts support on Windows Phone. Google's spearhead product with a very lacking UX. Makes it hard to love a company.

  2. In the video it was suggested that we put up a mobile page for the home page instead of doing a home page redirect.  How would this be accomplished if the mobile pages are on a separate server?

  3. You seem to have forgotten to mention that the delayed CSS parsing technique relies on JS and does not account for users with JavaScript disabled. Bad approach IMO.

  4. We can always rely on Google to give us the hard facts when it comes to anything web-related. This is the best video to watch if you’re not yet convinced to make your website mobile friendly.
