When you use a bog-standard WordPress install, the caching header in the HTML response is
Cache-Control: max-age=600
OK, cool, this means cache the HTML for 10 minutes.
Additionally these headers are sent:
Date: Sat, 07 Dec 2024 05:20:02 GMT
Expires: Sat, 07 Dec 2024 05:30:02 GMT
These don't help at all, because they instruct the browser to cache for 10 minutes too, which the browser already knows from Cache-Control. Worse, Expires can actually be harmful when the client's clock is off. But let's move on.
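To see what your own install sends, a quick command-line check works (the URL here is a placeholder for your own site):
$ curl -sI https://example.com/ | grep -iE 'cache-control|expires|date'
And if your host happens to run Apache with mod_headers (an assumption about your setup), a Header unset Expires line in .htaccess is one way to drop the redundant Expires.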
WP Super Cache
This is a plugin I installed, made by the WP folks themselves, so joy, joy, joy. It saves the HTML generated by the PHP code to disk and then serves that cached copy to the next visitor. Win!
However, I noticed it adds another header:
Cache-Control: max-age=3, must-revalidate
And actually now there are two Cache-Control headers being sent, the new and the old:
Cache-Control: max-age=3, must-revalidate
Cache-Control: max-age=600
What do you think happens? Well, the browser goes with the more restrictive one, so the wonderfully cached (on disk) HTML is now stale after 3 seconds. Not cool!
A settings fix
Looking around in the plugin settings I see there is no way to fix this. There's another curious setting though, disabled by default:
[ ] 304 Browser caching. Improves site performance by checking if the page has changed since the browser last requested it. (Recommended)
304 support is disabled by default because some hosts have had problems with the headers used in the past.
I turned this on. It means that instead of a full new request after 3 seconds, the repeat visit will send an If-Modified-Since header, and since 3 seconds is a very short time, the server will very likely respond with a 304 Not Modified, which means the browser is free to use the copy from the browser cache.
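Hypothetically, the repeat visit then looks something like this (the path and date are made up, and the headers are trimmed to the relevant ones):
GET /hello-world/ HTTP/1.1
If-Modified-Since: Sat, 07 Dec 2024 05:20:02 GMT

HTTP/1.1 304 Not Modified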
Why 600? I'd cache for longer, but there's this other Cache-Control 600 coming from who-knows-where, so 600 is the max I can do. (TODO: figure out where that other Cache-Control comes from and ditch it)
Why stale-while-revalidate? Well, this lets the browser use the cached response after the 10 minutes are up, while it re-checks for a fresher copy in the background.
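Put together, the header I'm aiming for looks something like this (the stale-while-revalidate value here is only an example, one day):
Cache-Control: max-age=600, stale-while-revalidate=86400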
In the browser's network panel you can see no more requests for HTML, just one for stats. No static resources either (CSS, images, JS are cached "forever"). So the page is loaded completely from the browser cache.
Animated gifs are fun and all, but they can get big (in file size) quickly. At some point, maybe after just a few low-resolution frames, it's better to use an MP4 and an HTML <video> element. Preferably you also want a "poster" image for the video, so people can see a quick preview before they decide to play it. The procedure can be pretty simple thanks to amazing, freely available open-source command-line tools.
Step 1: an MP4
For this we use ffmpeg:
$ ffmpeg -i amazing.gif amazing.mp4
Step 2: a poster image
Here we use ImageMagick to take the first frame in a gif and export it to a PNG:
$ magick "amazing.gif[0]" amazing.png
... or a JPEG, depending on the type of video (photographic vs. more shape-y)
... then optimize the result with your favorite image-smushing tool, e.g. ImageOptim
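The JPEG version, for reference, is the same ImageMagick command with a different output extension:
$ magick "amazing.gif[0]" amazing.jpg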
Comments
I did this for a recent Perfplanet calendar post and a 2.5MB gif turned into a 270K mp4. Another 23MB gif turned into a 1.2MB mp4.
I dunno if my ffmpeg install is to blame but the videos didn't play in QuickTime/FF/Safari, only in Chrome. So I ran them through HandBrake and that solved it. Cuz... ffmpeg options are not for the faint-hearted.
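If you'd rather stay in ffmpeg, my guess (and it is just a guess) is that the gif's odd dimensions and the default pixel format are to blame; forcing even dimensions and yuv420p usually makes Safari/QuickTime happy:
$ ffmpeg -i amazing.gif -vf "scale=trunc(iw/2)*2:trunc(ih/2)*2" -pix_fmt yuv420p -movflags +faststart amazing.mp4
The -movflags +faststart bit also moves the metadata to the front of the file, so playback can start before the whole video is downloaded.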
Do use preload="none" so that the browser doesn't load the whole video unless the user decides to play it. In my testing without preload="none", Chrome and Safari send range requests (an HTTP Range: bytes=0- header) and get 206 Partial Content responses; Firefox just downloads the whole thing.
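Putting it all together, the markup can look something like this (filenames and dimensions are just for illustration):
<video controls preload="none" poster="amazing.png" width="640" height="360">
  <source src="amazing.mp4" type="video/mp4">
</video>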
The Fermi Paradox is the contradiction between high estimates of the probability of the existence of extraterrestrial civilizations, such as those from the Drake equation, and the lack of any evidence for such civilizations.
There are billions of stars in the galaxy that are similar to the Sun, including many billions of years older than Earth.
With high probability, some of these stars will have Earth-like planets, and if the Earth is typical, some might develop intelligent life.
Some of these civilizations might develop interstellar travel, a step humanity is investigating now.
Even at the slow pace of currently envisioned interstellar travel, the Milky Way galaxy could be completely traversed in about a million years.
According to this line of thinking, the Earth should have already been visited by extraterrestrial aliens. In an informal conversation, Fermi noted no convincing evidence of this, nor any signs of alien intelligence anywhere in the observable universe, leading him to ask, “Where is everybody?”
Many have argued that the absence of time travelers from the future demonstrates that such technology will never be developed, suggesting that it is impossible. This is analogous to the Fermi paradox and its absence of evidence of extraterrestrial life. Just as the absence of extraterrestrial visitors does not categorically prove they do not exist, the absence of time travelers fails to prove time travel is physically impossible; it might be that time travel is physically possible but is never developed, or is used only cautiously. Carl Sagan once suggested the possibility that time travelers could be here but are disguising their existence or are not recognized as time travelers.
It seems, to me at least, clear evidence that time travel is not possible, given the enormous amount of time behind us. Something, somewhere, would certainly have invented it by now... right?
The Great Filter theory says that at some point from pre-life to Type III intelligence, there’s a wall that all or nearly all attempts at life hit. There’s some stage in that long evolutionary process that is extremely unlikely or impossible for life to get beyond. That stage is The Great Filter.
We are rare: almost no life makes it to this point, but we did
We are not a rare form of life, but near the first to evolve
The Great Filter is still ahead of us, and we have yet to hit it
Those are three Great Filter possibilities, but the question remains: why are we so alone in the observable universe? I grant you that what we can observe is appallingly tiny given the unimaginable scale of the universe, so “what we can observe” may not be enough by many orders of magnitude.
I encourage you to read the entire article; it's full of great ideas explained well, including many other Great Filter possibilities. Mostly I wanted to share my personal theory of why we haven't encountered alien life by now. Like computers themselves, things don't get larger. They get smaller. And faster. And so does intelligent life.
Why build planet-size anything when the real action is in the small things? Small spaces, small units of time, everything gets smaller.
Large is inefficient and unnecessary. Look at the history of computers: from giant to tiny and tinier. From slow to fast and faster. Personally, I have a feeling really advanced life eventually does away with all the physical stuff that slows it down, as soon as it can, and enters the infinite spaces between:
This is, of course, a variant on the Fermi paradox: We don’t see clues to widespread, large-scale engineering, and consequently we must conclude that we’re alone. But the possibly flawed assumption here is when we say that highly visible construction projects are an inevitable outcome of intelligence. It could be that it’s the engineering of the small, rather than the large, that is inevitable. This follows from the laws of inertia (smaller machines are faster, and require less energy to function) as well as the speed of light (small computers have faster internal communication). It may be – and this is, of course, speculation – that advanced societies are building small technology and have little incentive or need to rearrange the stars in their neighborhoods, for instance. They may prefer to build nanobots instead.