Re: Another approach to improve w3m-download
On 2019-03-22 17:17, Katsumi Yamaoka wrote:
> One way to make it fast would be to download data directly to
> a file, not a buffer.
Exactly so! What suddenly motivated me to write what I did was a crash
I suffered while downloading several huge files with the w3m version,
probably because it was temporarily saving the data in memory instead
of directly to a file.
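Roughly, a minimal sketch of the idea (just an illustration, not the
actual code; `my-download-to-file' is a made-up name, and it assumes
wget is installed) would be to let wget write the data straight to
disk, so it never sits in an Emacs buffer:

  ;; Sketch only: hand the whole transfer to wget, which writes FILE
  ;; itself, so the downloaded data never occupies an Emacs buffer.
  (defun my-download-to-file (url file)
    "Download URL asynchronously into FILE using wget."
    (make-process
     :name "my-download"
     :buffer nil                   ; no output buffer needed
     :command (list "wget" "-q" "-O" (expand-file-name file) url)
     :sentinel (lambda (_proc event)
                 (message "wget: %s" event))))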
> There is neither a cache nor a progress indicator, but who is
> bothered?
For large downloads I appreciate a progress indicator, because my
internet connection is slow, and often very slow.
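That said, wget prints its progress on stderr, and make-process hands
that to the process filter when no separate :stderr buffer is given,
so an echo-area indicator is not much extra work. A rough sketch (the
file name and URL are only placeholders):

  ;; Sketch of a progress display: with --progress=dot wget prints
  ;; percentage figures, which the filter echoes as they arrive.
  (defun my-download-progress-filter (_proc output)
    "Echo a percentage found in wget's progress OUTPUT, if any."
    (when (string-match "\\([0-9]+\\)%" output)
      (message "Downloading... %s%%" (match-string 1 output))))

  (make-process
   :name "my-download-progress"
   :buffer nil
   :command (list "wget" "--progress=dot:mega" "-O" "/tmp/big.iso"
                  "https://example.org/big.iso")
   :filter #'my-download-progress-filter)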
> Even if download fails, there should be no way to help it (if it is
> not due to an emacs-w3m bug) other than retrying it.
On this point wget and other downloaders (e.g. curl, aria2c, axel) are
superior, because they can resume an aborted download instead of
restarting it from scratch.
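In the sketch above that would only be one more flag on the command
line; with wget, -c continues a partially downloaded file (again, the
path and URL are only placeholders):

  ;; Same wget invocation, with -c added so an aborted transfer
  ;; resumes where it stopped instead of starting over.
  (make-process
   :name "my-download-resume"
   :buffer nil
   :command (list "wget" "-c" "-q" "-O" "/tmp/big.iso"
                  "https://example.org/big.iso")
   :sentinel (lambda (_proc event) (message "wget: %s" event)))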
> `w3m -dump_extra' issues too much amount of progress indicators
One issue may be that the project uses a very unusual method of
dealing with asynchronous processes, one that was perhaps
state-of-the-art when it was written. I once attempted a full
refactor, and would like to try again. When writing the wget-download
code, I found it much simpler to start from scratch using the
straightforward tools in the current Emacs documentation.
--
hkp://keys.gnupg.net
CA45 09B5 5351 7C11 A9D1 7286 0036 9E45 1595 8BC0