<!DOCTYPE html>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body>
<div class="moz-cite-prefix">On 5/5/25 12:58, Greg Hellings wrote:<br>
</div>
<blockquote type="cite"
cite="mid:CAHxvOVLZ93f-EmnAgyR3KagaqETPS7n6R-JcpyfngADCxJH=tA@mail.gmail.com">zVerse
is limited to a 65,535 byte cap per block... zText4 uses a 32-bit
value</blockquote>
<br>
<font face="FreeSerif">Kinda serious question:<br>
<br>
In a world of Gbit network links, memory sizes typically in the tens
of Gbytes, and multi-Tbyte storage, does module compression make any
sense today?<br>
<br>
- Nobody is using PDP-11s any more, and we're not struggling to
transfer data over sloppy, error-prone 56kbps links.<br>
- Tiny handheld devices (i.e. smartphones) have 5G network access,
64-bit addressing, and -- minimally -- dozens of Gbytes of
storage.<br>
<br>
My main environment has nearly 900 modules installed (because sooner
or later I have to experiment with darn near everything, and it
accretes), whose total footprint is just 6.6 Gbytes. This is
not consequential storage today.<br>
<br>
</font><font face="monospace">du -sb .sword<br>
6659535239 /home/karl/.sword<br>
ls .sword/mods.d/*.conf | wc -l<br>
898<br>
df -Th .<br>
Filesystem Type Size Used Avail Use% Mounted on<br>
/dev/nvme1n1p7 ext4 702G 521G 181G 75% /home<br>
</font><font face="FreeSerif"><br>
Are compression's space savings actually worth it? Is the reduced
I/O of reading a compressed module offset by the increased
complexity of handling it?<br>
<br>
The biggest text module currently being distributed (BSB,
compressed) is &lt;27 Mbytes. Everything else is smaller than that.<br>
<br>
Sword could be reimplemented to use mmap() to inhale entire
uncompressed bibles into virtual memory nearly instantaneously (the
mapping copies nothing up front; the kernel pages data in on demand)
without causing the slightest grief, rather than spending time
managing decompression.<br>
<br>
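A rough sketch of the idea in POSIX C++ (this is not SWORD's actual
API, and the module path is only illustrative):<br>
</font>
<pre>
#include &lt;sys/mman.h&gt;
#include &lt;sys/stat.h&gt;
#include &lt;fcntl.h&gt;
#include &lt;unistd.h&gt;
#include &lt;cstdio&gt;

int main() {
    // Hypothetical path to one data file of an uncompressed (raw) module.
    const char *path = "/home/karl/.sword/modules/texts/rawtext/example/ot";

    int fd = open(path, O_RDONLY);
    if (fd &lt; 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &amp;st) &lt; 0) { perror("fstat"); close(fd); return 1; }

    // The mmap() call itself is cheap: nothing is read yet.
    // The kernel faults pages in from the file as they are touched.
    void *base = mmap(nullptr, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (base == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    // The whole file is now addressable as ordinary memory; a verse
    // lookup reduces to offset arithmetic against 'base'.
    const char *text = static_cast&lt;const char *&gt;(base);
    printf("first bytes: %.40s\n", text);

    munmap(base, st.st_size);
    close(fd);
    return 0;
}
</pre>
<font face="FreeSerif"><br>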
Maybe let the VM system do the work instead. I sense that
compression handling has become an anachronism.<br>
</font>
</body>
</html>