Matthew Petroff (mpetroff.net)

Climbing Cerro Zapaleri
17 April 2021

Last month, I climbed Cerro Zapaleri, whose 5648 m summit forms the tripoint of the borders of Chile, Argentina, and Bolivia.1 Its location is quite remote: ~105 km from San Pedro de Atacama, Chile, and >40 km from the nearest paved road, both as the crow flies. After researching previous accounts of ascents and poring over high-resolution satellite imagery to map out routes both to reach the mountain and to climb it, it was time to depart. As expected, a high-clearance four-wheel-drive vehicle would prove necessary.

Cerro Zapaleri

A colleague of mine working on the CLASS project and I left our accommodations in San Pedro de Atacama shortly after 5 am2 and met up with some associates from the ACT project, in a second four-wheel-drive pickup truck, at the start of the Jama road (CH 27) at around 5:30 am. As with previous climbs of Lascar and Cerro Toco, we informed others of our plans and took a satellite phone and a satellite-based locator beacon as precautions. We then drove toward the Argentinean border, to kilometer 147.5,3 arriving just before 7 am. There, we turned off the Jama road and began following a dirt track north after crossing the buried gas pipeline. As it was still dark, finding the turn-off was somewhat challenging, although once we found it, following the dirt track was not too difficult (if a bit bumpy). We stopped to watch part of the sunrise over Laguna Helada.

Sunrise over Laguna Helada

As we continued to drive north, we crossed two washes. The first had some water, but the second was dry; neither presented any challenge to cross. Shortly after 8 am, we reached Río Zapaleri, at a location just upriver from where the Quebrada de Chicaliri tributary joins. While tracing out the route to the mountain on the satellite imagery, I was concerned that we would not be able to ford the river here; none of the accounts of previous ascents I found took this route. Most cross the river at the Argentinean border and climb via the gulch just over the border4 or ford the river on foot and climb by a similar route. The tracks visible in the satellite imagery showed that the standard route, which crosses the border at “Paso Zapaleri,” was considerably better traveled. However, that route would have meant a significantly longer climb, which I sought to avoid. Fortunately, we were able to ford the river at the Quebrada de Chicaliri confluence without difficulty, although our pickup truck’s high clearance proved essential in doing so. This year, February and March were drier than normal, and our March 16 climb was a week after the last time it had snowed on Zapaleri (per Planet Labs satellite imagery); had it been wetter, we might not have been able to cross.

About to cross Río Zapaleri

Río Zapaleri

We then drove out of the ravine5 and continued north toward the Bolivian border. Around 2.5 km from the border, we turned off the established track6 toward the northeast and began following the bottom of a ravine in the direction of the Zapaleri summit. We continued up this ravine until the combination of the steep slope and the high altitude meant that we no longer had sufficient engine power to go farther, even in 4L, and then parked the trucks perpendicular to the slope shortly before 9 am. My prediction for the trailhead, based on the satellite imagery and digital elevation models, proved quite accurate; GPS coordinates put the spot where we parked only ~20 m from the point I had marked on the map before setting out.

Pickup truck parked at trailhead

As Zapaleri is rarely climbed, and even more rarely climbed via the route I planned out, there was no trail. We began our climb by heading east, to the top of one of the ridges that border the ravine we drove up.7 We then followed this ridge to the northeast until it met a larger ridge. We continued climbing to the northwest along the larger ridge to near the summit. In retrospect, the climb would have been easier if we had climbed parallel to the ridge but slightly downhill from it toward the northeast. This would have avoided climbing over some of the outcroppings that are along the ridge and kept us out of the wind. However, the climb along the ridge provided an outstanding view toward the southwest, which we would have missed if we had stayed out of the wind.

Climbing Cerro Zapaleri

Cerro Zapaleri summit from distance

Rock formation on Cerro Zapaleri

Next, we turned toward the northeast and headed for the summit. The final summit is quite impressive and juts up abruptly from the rest of the mountain.8 The only non-technical route up it is from the west, via a scramble up an extremely steep slope composed of loose rock and scree. Had we known this in advance, my colleague and I would have brought climbing helmets, which I would recommend for safety. We climbed the slope while following the base of the cliff face, since this gave additional handholds. We climbed one at a time and stopped at regular intervals in areas of more secure footing to let each other catch up, reducing the risk of being hit by dislodged rocks.

Cerro Zapaleri summit

Cerro Zapaleri summit

Finally, we reached the summit at ~1:15 pm. The summit holds a three-sided painted steel border monument and is surrounded by steep drop-offs. The view is incredible. Additionally, if one walks north of the border monument, the bright green crater lake is visible.9 Although the mountain is not climbed particularly often, we did find a note left by recent climbers in a small, empty liquor bottle; apparently, a group of Argentine narcotics police officers had climbed it on 24 December 2020.

View from summit of Cerro Zapaleri

Looking down from summit monument

Cerro Zapaleri summit

Cerro Zapaleri summit

Green crater lake viewed from Cerro Zapaleri summit

Using the Shuttle Radar Topography Mission 1-arcsecond (~30 m) digital elevation model (DEM), I calculated an elevation profile for the climb. According to these data, the approximately 2.4 km climb had ~630 m of vertical gain, resulting in an average grade of ~26%; the maximum grade was ~48%. However, the DEM does not have the resolution necessary to resolve the summit features, and the final push to the summit was almost certainly even steeper. Additionally, the DEM seems to systematically underestimate the elevation, at least when compared to the GPS track I recorded (which put the summit elevation at ~5700 m).
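The arithmetic behind those numbers is just differencing consecutive profile samples. As a minimal sketch, with hypothetical profile values standing in for the actual DEM samples:

```python
# Hypothetical (distance along route, elevation) samples in meters,
# standing in for the profile extracted from the SRTM DEM.
profile = [(0, 5070), (400, 5160), (800, 5270), (1200, 5380),
           (1600, 5490), (2000, 5590), (2400, 5700)]

segments = list(zip(profile, profile[1:]))
gain = sum(max(e2 - e1, 0) for (_, e1), (_, e2) in segments)
grades = [(e2 - e1) / (d2 - d1) for (d1, e1), (d2, e2) in segments]

print(f"Vertical gain: {gain} m")                     # 630 m
print(f"Average grade: {gain / profile[-1][0]:.0%}")  # 26%
print(f"Maximum grade: {max(grades):.0%}")
```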

After around an hour at the summit, we began to head back down. While my colleague from CLASS had made it to the summit with me, our associates, some of whom were not as well acclimatized, did not, although some came close. We had climbed more slowly than we would have otherwise and waited at the summit for them to catch up, but with the wind and the cold, waiting around while not moving is not particularly pleasant, and we eventually gave up and departed. The way down from the summit proved to be the most difficult part of the climb. Due to the steep slope and loose rock and scree, it mostly involved sliding down on one’s derriere while using one’s feet and hands to avoid sliding too fast or knocking loose too many rocks. Once down from the summit, the rest of the descent was not too bad, although the steep slope again made progress somewhat slow. We stayed downhill of the ridge we had climbed up on, to stay out of the wind. We made it back to the trucks at around 4 pm, seven hours after we had started climbing; I had gone through around half of the ~3.5 L of water I carried. After a brief rest, we began driving back to San Pedro de Atacama. Driving both to and from Zapaleri, we saw many, many herds of vicuñas,10 and on the way back, we also saw a half dozen puna tinamous. We made it back to the paved Jama road at ~5:30 pm and to San Pedro de Atacama at ~7:15 pm, roughly 14 hours after we had left.


After returning to the United States, I did some research on the origin of the summit border monument to try to determine when it was installed. The best resource I could find on this was a 1953 report on marking the border between Argentina and Bolivia (which is also where the 5648 m elevation I used in the first sentence of this blog post came from).11 In addition to a written description of the border and survey, the report contains detailed maps, photos, and drawings of the border monuments.

Line drawing of border monument design with dimensions marked

Although the border between Chile and Argentina was surveyed and marked around 1900, the summit of Cerro Zapaleri was neither surveyed nor marked at that time.12 However, the Argentina–Bolivia survey, which started in 1939, did mark it on 30 November 1940.

Photo of two men installing border monument on the summit of Zapaleri

If one compares the photo and drawing of this original border monument to the photos of the current one earlier in this blog post, it is readily apparent that the current one is much shorter than the original 3.5 m tall monument, at approximately half the height. A closer inspection of the cross-bracing reveals that the monument was not just shortened but completely replaced; the cross-bracing on the current monument is installed on the outside of the angle irons instead of on the inside,13 and the bracing on the current monument connects to the angle irons farther from the top than on the original. Unfortunately, I was unable to determine when or why the monument was replaced. However, graffiti on the current monument mentions years dating back to 1997, so it was clearly replaced sometime before then. In addition to the monument, I located three brass survey markers. The Bolivian Instituto Geográfico Militar installed a triangulation station marker and an associated reference marker in 1965, and the Chilean Instituto Geográfico Militar installed a triangulation station marker in 1970.14 I was unable to locate any documentation on these markers, so I can’t say whether the replacement of the border monument was associated with the survey marker installations.


  1. This was at the end of a nine-week trip to Chile for telescope repair and maintenance work. Traveling during the COVID-19 pandemic, even with an N95 mask and PCR tests, was a nightmare, particularly for the flights in the United States, and I would not have done so if the repair work wasn’t necessary.  

  2. This was the earliest we could leave, since the COVID-related curfew ended at 5 am. However, leaving earlier would have involved more driving off-road in the dark, which is less than ideal.  

  3. An alternative turn-off at kilometer 144.4 can also be used.  

  4. This is consistent with what local tour guides communicated to me via my colleagues.

  5. The track was extremely rutted on the steep slope, and I would highly recommend using 4L here to avoid needing higher speeds to keep up engine power.

  6. Per satellite imagery, the track continues on and crosses the Bolivian border.  

  7. Heading directly toward the summit would have been a much steeper and more difficult route.  

  8. I attempted to use photogrammetry to reconstruct a 3D model of the summit using drone footage from a Chilean TV show. Unfortunately, the low-quality video frames combined with a lack of orbiting shots led to a failed reconstruction. I didn’t bring my DJI Mavic Mini to take my own photos for photogrammetry, since it can’t handle the high altitude (it barely flies at ~5200 m without any wind).  

  9. We considered stopping at the lake on the way down but decided against it, since it would have involved an additional uphill climb to return from it.  

  10. These vicuñas were far more skittish than the vicuñas found along the Jama road or on Cerro Toco.  

  11. Informe Final De La Comision Mixta Demarcadora De Limites Argentina–Bolivia. Buenos Aires: Talleres Gráficos del Instituto Geográfico Militar, 1953.  

  12. La Frontera Argentino–Chilena: Demarcación General, 1894–1906. Buenos Aires: Talleres Gráficos de la Penitenciaria Nacional, 1908.  

  13. A detailed photo of a different monument in the same publication shows that the actual monuments matched the drawing.  

  14. I did not find any markers from Argentina.  

Baking a Sierpiński Carpet Linzer Cookie
30 March 2021

As a follow-up to my previous entries for the Ashley Book of Knots and Space-Filling Curves, I decided to enter a submission into this year’s Johns Hopkins University Sheridan Libraries’ (virtual) Edible Book Festival contest for Mandelbrot’s The Fractal Geometry of Nature. This raised the questions of which fractal to use and how to make it edible. To this end, I decided to bake a Sierpiński carpet Linzer cookie. The zeroth iteration of the fractal, a solid square, forms the first layer of the cookie, while the first three iterations of the fractal form three additional layers, for four cookie layers in total.

Photo of a Sierpiński carpet Linzer cookie

In order to create the 15 cm square cookie, I designed and 3D printed a set of cookie cutters out of PLA plastic.1 These consisted of a wavy-edged square for the border of all the layers, a smaller square for the center of the three fractal layers, an even smaller square for the second fractal iteration layer, and the smaller square with eight tiny squares around it for the final iteration layer. This set of cutters was a compromise between having a separate, complete cutter for each layer—which would have required more print time and material—and making only one cutter per size of cutout—which would have been more difficult to align and use.
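As a rough illustration of the cutout geometry (not the actual cutter designs, which are linked in the first footnote), the squares removed through each iteration follow directly from the carpet recursion:

```python
def carpet_cutouts(iteration, x=0.0, y=0.0, size=15.0):
    """All squares removed through the given Sierpinski carpet iteration,
    as (x, y, side) tuples in cm; each cookie layer cuts out one
    iteration's full set of squares."""
    if iteration == 0:
        return []  # zeroth iteration: a solid square, no cutouts
    third = size / 3
    holes = [(x + third, y + third, third)]  # center square at this level
    for i in range(3):
        for j in range(3):
            if (i, j) != (1, 1):  # recurse into the eight outer cells
                holes += carpet_cutouts(iteration - 1,
                                        x + i * third, y + j * third, third)
    return holes

# len(carpet_cutouts(1)) == 1, len(carpet_cutouts(2)) == 9,
# len(carpet_cutouts(3)) == 73 cutouts on the final layer
```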

Photo of cookie cutters for a Sierpiński carpet Linzer

An existing Linzer cookie recipe was used for the cookie itself.2 After preparing the dough and using the cookie cutters, the cut dough was placed in the freezer for a few minutes to cool, which made it easier to transfer the large cookie layers to the baking sheet. The cookie layers were then baked. Since dough expands when baked—and baking is not great for maintaining dimensional tolerances in general—the cookie cutters were used again on the cookie layers’ internal cutouts as soon as the layers were removed from the oven. The baked layers were then frozen to make them more durable and easier to handle.3 Once frozen, the layers were removed, and the edges of the cutouts were cleaned up with a knife and, in the case of the tiny squares on the final layer, a paperclip. The final layer was then dusted with powdered sugar, and the layers were assembled into the final cookie with raspberry jelly between them.

Photo of a Sierpiński carpet Linzer cookie

Photo of a Sierpiński carpet Linzer cookie

For a different take on Sierpiński carpet cookies, see Evil Mad Scientist Laboratories’ 2008 blog post on the subject.


  1. I have uploaded the cookie cutter designs.

  2. The recipe is for 20 three-inch cookies, which was just enough for one large Sierpiński carpet cookie.

  3. I still managed to crack the final layer in half, but the damage is not particularly noticeable under the powdered sugar.  

Space-efficient Embedding of WebAssembly in JavaScript
20 February 2021

Recently, I came across a blog post about converting parts of a JavaScript library into WebAssembly. The part that interested me most was a section about efficiently embedding the WebAssembly binary into the JavaScript code so that the library could be distributed as a single file, instead of the usual method of providing the WebAssembly binary as a separate file. This is accomplished by Base64-encoding the WebAssembly binary as a string and including the resulting string in the JavaScript file. Unfortunately, this significantly inflates the total file size, since the Base64-encoded string does not compress nearly as well as the original binary. To mitigate this issue, the blog post’s author had the clever idea of gzip-compressing the binary prior to Base64-encoding it and using the zlib.js JavaScript library to decompress the binary client-side, after undoing the Base64 encoding. While this significantly reduced the size of the Base64-encoded WebAssembly binary, it required ~6.5 kB of decompression code, after gzip compression.1

While I liked the idea of compressing the WebAssembly binary prior to Base64-encoding it, I thought there must be a way to decompress it with less code. The simplest change would be to use a raw Deflate-compressed payload instead of one encapsulated with gzip, as the zlib.js library also provides a decompression function for this, which is only ~2.5 kB after gzip compression, saving ~4 kB. However, this is still excessive; it shouldn’t be necessary to ship a Deflate decompression function at all, since web browsers already include such functionality. Although this decompression functionality isn’t exposed directly to JavaScript, PNG images can be decoded from JavaScript, and PNG images use Deflate compression. Thus, I decided to encode the WebAssembly binary as a grayscale PNG image, Base64-encode the PNG as a data URI, and include the resulting string in the JavaScript file.

To encode the binary as a PNG image, the dimensions of the image must first be decided on. For this, I decided to set the image width to the smallest power of two that allowed the image to have a landscape aspect ratio, although this decision was somewhat arbitrary. Each pixel in the grayscale image corresponds to one byte in the WebAssembly binary, starting in the top-left corner of the image and wrapping line-by-line. Any remaining pixels in the last row of the image were set to zero, but this presented a problem, since zero-padding a WebAssembly binary is not allowed. Thus, the first four pixels of the image are used to store the size of the WebAssembly binary as an unsigned 32-bit little-endian integer, which can then be used by the decoder to truncate the image data to the correct length. The resulting PNG image can then be optimized using tools such as OxiPNG to reduce its file size further, after which the PNG image is Base64-encoded as a data URI.
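As a sketch of these encoding steps (the function name and the use of the Pillow library are my choices here, not necessarily those of the actual encoder, which is linked below):

```python
import math
import struct
from PIL import Image  # Pillow

def encode_wasm_to_png(wasm_bytes, out_path):
    # Prefix the payload with the binary's length as an unsigned 32-bit
    # little-endian integer, stored in the first four grayscale pixels.
    payload = struct.pack("<I", len(wasm_bytes)) + wasm_bytes
    # Smallest power-of-two width that keeps the image landscape.
    width = 1
    while width * width < len(payload):
        width *= 2
    height = math.ceil(len(payload) / width)
    # Zero-pad the final row; the decoder truncates using the length header.
    payload += bytes(width * height - len(payload))
    # One byte per pixel, 8-bit grayscale ("L"), wrapping line-by-line.
    Image.frombytes("L", (width, height), payload).save(out_path)
```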

To decode the Base64-encoded PNG image into WebAssembly from JavaScript, the Image() constructor is used to create an <img> element from the Base64-encoded string. Then, the <img> is drawn to a <canvas> element, and the getImageData() method is used to extract the image data as an array. The array is then filtered to keep only every fourth value (one channel per pixel), converting the RGBA data to grayscale.2 Next, the first four bytes containing the WebAssembly binary length are decoded, removed from the array, and used to truncate the array to just the WebAssembly binary data. Finally, these data are used to instantiate the WebAssembly code. The decoding routine is <0.3 kB after gzip compression.
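A condensed sketch of such a decoding routine (the actual demo code, linked below, differs in its details):

```javascript
// `pngDataUri` is the Base64-encoded PNG data URI embedded in the file.
function decodeWasmPng(pngDataUri) {
    return new Promise((resolve, reject) => {
        const img = new Image();
        img.onload = () => {
            const canvas = document.createElement('canvas');
            canvas.width = img.width;
            canvas.height = img.height;
            const ctx = canvas.getContext('2d');
            ctx.drawImage(img, 0, 0);
            const rgba = ctx.getImageData(0, 0, img.width, img.height).data;
            // Keep every fourth value to recover the grayscale bytes.
            const bytes = rgba.filter((_, i) => i % 4 === 0);
            // The first four bytes hold the length (unsigned 32-bit LE).
            const length = new DataView(bytes.buffer).getUint32(0, true);
            resolve(WebAssembly.instantiate(bytes.slice(4, 4 + length)));
        };
        img.onerror = reject;
        img.src = pngDataUri;
    });
}
```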

I have made available a demo that includes an encoding procedure written in Python, the JavaScript decoding procedure, and a live example using MDN’s simple WebAssembly example. While this demonstrates the technique, it doesn’t provide a meaningful example of the bandwidth savings. Thus, I also applied the technique to the ammo.js WebAssembly demo. For this example, the original WebAssembly binary is ~651 kB uncompressed, ~252 kB when compressed with gzip -9, and ~218 kB when compressed with brotli -9. As a Base64-encoded string, it is ~880 kB, which is reduced to ~358 kB when compressed with gzip -9 or ~316 kB when compressed with brotli -9. When converted to a PNG image using the technique described in this blog post and optimized using OxiPNG, the resulting image is ~242 kB. When converted to a Base64-encoded data URI, it is ~322 kB, which is reduced to ~244 kB when compressed with gzip -9 or ~242 kB when compressed with brotli -9. While not quite as small as the Brotli-compressed binary, the technique described in this blog post does better than the gzip-compressed binary and much better than naively including a Base64-encoded binary in JavaScript.


  1. The author mentioned 12 kB of decompression code, but that was without gzip-compression of the JavaScript code.  

  2. This step wouldn’t be necessary for an RGBA PNG, but RGBA PNGs don’t seem to compress WebAssembly binaries quite as well as grayscale PNGs.

Update on Figure Caption Color Indicators
31 October 2020

Last year, I published a blog post on figure caption color indicators. The positive feedback I received on it from a number of individuals prompted me to revisit the subject. At the time, I did not have a good way of locating published examples of such caption indicators and was only able to locate a few published examples with shape indicators but none with color indicators. When thinking about revisiting the subject, I had the epiphany that although searching for such indicators in the published literature is next to impossible, searching the LaTeX source markup of papers is not. As arXiv provides bulk access to the TeX source of its pre-prints, it offered a large corpus of manuscripts to search. After finding examples in pre-prints, I was able to check whether the indicators survived the publication process and was thereby able to locate well over one hundred examples of color line or shape indicators in the figure captions of published academic papers.

I broke the process into four steps: acquiring the data, extracting LaTeX commands from caption environments, finding potential figure caption candidates, and verifying these candidates. The arXiv source archive is well over 1 TB in size and is provided in an AWS S3 bucket configured such that the requester pays for bandwidth, which would have resulted in a bandwidth bill of >$100 if downloaded directly. As I was only interested in the TeX source and not the figures, which account for most of the total file size, and since AWS does not charge to transfer between S3 buckets and EC2 instances in the same region, I first ran a script on an EC2 instance to download from arXiv’s S3 bucket and extract and repackage just the TeX source files. This greatly reduced the amount of data transfer required and allowed me to download the full TeX source corpus for <$5. Next, I used the TexSoup Python package to process the TeX files and produce a list of LaTeX commands used in each caption environment. I then used a final script to search for papers that used command names referencing colors or shapes, compiling a list of likely paper candidates, and produced HTML files for each year containing a link to the PDF of each candidate paper as well as the full TeX source for the identified caption, with the matching commands highlighted. Finally, I manually verified the papers using these HTML files. Except for trivial false positives, which could be identified by looking at the included caption source, I manually looked at the PDF of each candidate paper, verified that it included a visual caption indicator, and classified the indicator if it had one. For papers that included indicators, I then attempted to locate the published version of record and did the same for it.
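The caption-command extraction amounted to walking the parse tree of each caption. A simplified sketch of that step (the TexSoup API details here are from memory and should be treated as assumptions):

```python
from TexSoup import TexSoup

def commands_in(node):
    # Recursively yield the names of all commands nested under a node.
    for child in node.children:
        yield child.name
        yield from commands_in(child)

def caption_command_names(tex_source):
    # Names of all LaTeX commands used inside \caption arguments.
    soup = TexSoup(tex_source)
    names = set()
    for caption in soup.find_all('caption'):
        names.update(commands_in(caption))
    return names
```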

Through this process, my scripts located ~5100 paper candidates from the beginning of arXiv in 1992 through the end of June 2020. I manually verified the candidates for papers submitted prior to the end of 2016; these accounted for ~2000 candidates, of which I verified ~1100 papers to have some sort of visual caption indicator. For ~700 of these, I was able to verify the presence of some form of visual caption indicator in the published version of record. Of these, ~60% included a black shape or line indicator, ~25% included a color shape or line indicator, and the remainder included colored text. The fraction of papers with color shape or line indicators was higher in the pre-prints, since it was not uncommon for the published version to include a black indicator where the pre-print included a colored one. I stopped at the end of 2016 since the verification process was quite time-consuming, and I could only look at so many papers before giving up.

These findings show that the idea of using figure caption color indicators is by no means new. However, it is still quite rare in relative terms, since at most a couple thousand of arXiv’s ~1.7 million pre-prints include such indicators. Most of the examples I found used a colored shape or line in parentheses, or both in cases where both a line and marker were used. My proposal to use a colored underline does still appear to have been a novel concept, but it proved quite complicated to implement, so using shapes or lines in parentheses is much more practical, since it is simpler and is evidently compatible with many publishers’ workflows. Furthermore, the existing examples can be used as evidence when complaining about paper proofs, after the typesetter predictably removes the indicators, to show that the indicators are possible and that they can and should be included in the final published version of the paper.
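For concreteness, an indicator of this sort can be drawn inline with TikZ; this is a sketch of one possible macro (the name and styling are illustrative, not taken from any particular paper, and \protect may be needed since captions can end up in the list of figures):

```latex
% Preamble:
\usepackage{tikz}
\newcommand{\lineind}[1]{%
  \tikz[baseline=-0.5ex]\draw[#1, thick] (0,0) -- (2em,0);}

% In a figure environment:
\caption{Measured spectrum (\protect\lineind{red}) compared to the
  model prediction (\protect\lineind{blue, dashed}).}
```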

One color indicator that I recommend against using is colored text, since it can be difficult to read and often violates WCAG contrast guidelines. Its use seems particularly common in the computer vision literature and, to a lesser degree, the machine learning literature. It is often used to highlight table entries, a purpose much better served by using italic, bold, or bold–italic text.

I have made the scripts used for this analysis, the paper candidates, and the final verified results available. The final verified results are also available separately for easy viewing. Note that the verified results are incomplete and may contain errors.

Pre-calculated line breaks for HTML / CSS
25 May 2020

Although slowly improving, typography on the web is of considerably lower quality than high-quality print / PDF typography, such as that produced by LaTeX or Adobe InDesign. In particular, line breaks and hyphenation need considerable improvement. While CSS originally never specified what sort of line breaking algorithm should be used, browsers all converged on greedy line breaking, which produces poor-quality typography but is fast, simple, and stable. CSS Text Module Level 4 standardizes the current behavior as the default of a text-wrap property while introducing a pretty value, which instructs the browser to use a higher-quality line breaking algorithm. However, as of the time of writing, no browser supports this property.
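Once support does arrive, opting in should be a one-line declaration, per the Level 4 draft:

```css
/* No browser supports this at the time of writing, so the declaration
   is aspirational rather than usable. */
p {
    text-wrap: pretty; /* request higher-quality line breaking */
}
```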

I recently came across a CSS library for emulating LaTeX’s default appearance.1 However, it doesn’t emulate the Knuth–Plass line breaking algorithm, which is one of the things that makes LaTeX look good. This got me wondering whether it’s possible to emulate this algorithm with plain HTML and CSS. A JavaScript library already exists to emulate it, but it adds extra complexity and is a bit slow. It turns out that it is possible to pre-calculate line breaks and hyphenation for specific column widths in a manner that can be encoded in HTML and CSS, as long as web fonts are used to standardize the text appearance across browsers.

The key is to wrap all the potential line breaks (inserted via ::after pseudo-elements) and hyphens in <span> elements that are hidden by default with display: none;. Media queries are then used to selectively show the line breaks specific to a given column width. Since every line has an explicit line break, justification needs to be enabled using text-align-last: justify;, and word-spacing: -10px; is used to avoid additional automatic line breaks due to slight formatting differences between browsers. However, this presents a problem for the actual last line of each paragraph, since it is now also justified instead of left aligned. This is solved by wrapping each possible last line in a <span> element. Using media queries, the <span> element corresponding to the given column width is set to use display: flex;, which makes the content be left-aligned and take up the minimum space required, thereby undoing the justification; word-spacing: 0; is also set to undo the previous change to it and fix the word spacing. Unfortunately, the nested <span> elements are problematic, because there are no spaces between them; this is fixed by including a space in the HTML markup at the beginning of the <span> and setting white-space: pre; to force the space to appear.
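Condensed into a sketch for a single pre-calculated width (the class names, the 300 px breakpoint, and the sample text are illustrative; the actual demo, linked below, handles several widths plus hyphens):

```html
<style>
  p.prebroken { text-align-last: justify; word-spacing: -10px; }
  .br { display: none; }  /* potential line breaks, hidden by default */
  @media (min-width: 300px) and (max-width: 349px) {
    .br.w300 { display: inline; }
    .br.w300::after { content: "\A"; white-space: pre; }  /* forced break */
    .last.w300 {          /* un-justify this width's actual last line */
      display: flex;
      word-spacing: 0;
      white-space: pre;   /* preserve the span's leading space */
    }
  }
</style>
<p class="prebroken">Words of the first pre-computed
line<span class="br w300"></span> words of the second
line<span class="last w300"> and the last line's words.</span></p>
```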

I’ve prepared a demo page demonstrating this technique. It was constructed by calculating line breaks in Firefox 76 using the tex-linebreak bookmarklet and manually inserting the corresponding markup; some fixes had to be made by hand because the library does not properly support em dashes. Line breaks were calculated for column widths between 250 px and 500 px at 50 px increments. The Knuth–Plass line breaks lead to a considerable improvement in the text appearance, particularly for narrower column widths. In addition to the improved line breaks, I also implemented protrusion of hyphens, periods, and commas into the right margin, a microtypography technique that further improves the appearance. To (hopefully) avoid issues with screen readers, aria-hidden="true" is set on the added markup; user-select: none; is also set, to avoid issues with text copying.

While this technique works fine in Firefox and Chrome, it does not work in Safari, since Safari does not support text-align-last as of Safari 13.2 Despite this, the corresponding WebKit bug is marked as “resolved fixed”; support was actually added in 2014, but it is behind the CSS3_TEXT compile-time flag, which is disabled by default. Thus, I devised an alternative method that used invisible 100%-width elements to force line breaks without explicit break characters. This also worked in Firefox and Chrome, although it caused minor issues with text selection, but it again had significant issues in Safari. It appears that Safari does not properly handle justified text with negative word spacing; relaxing the word spacing, however, causes extra line breaks due to formatting differences, which breaks the technique. At this point, I gave up on supporting Safari and simply set it to use the browser-default line breaking by placing the technique’s CSS behind an @supports query for text-align-last: justify.
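The gating is just a feature query wrapped around the technique's rules, along the lines of:

```css
/* Browsers without text-align-last support (e.g., Safari 13) skip this
   block entirely and fall back to default greedy line breaking. */
@supports (text-align-last: justify) {
  p.prebroken { text-align-last: justify; word-spacing: -10px; }
  /* ...plus the rest of the technique's rules from the sketch above... */
}
```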

Automated creation of the markup would be necessary to make this technique more generally useful, but the demo page serves as a proof of concept. Ideally, browsers would implement an improved line breaking algorithm, which would make this technique obsolete.


  1. Also see the corresponding Hacker News discussion.

  2. Even Internet Explorer 6 supports this.  
