We have at least 290(!) logged bugfixes queued up for the #curl release coming in four days. That's more than 6 bugs squashed per day on average during this release cycle.
Just imagine how many bugs we must be adding!
You can help #curl by testing this final release candidate, rc3, before the real release happens next week:
Two years ago we introduced the #libcurl header API, which also made it easier to extract headers with the #curl tool:
https://daniel.haxx.se/blog/2022/03/24/easier-header-picking-with-curl/
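A quick sketch of that header picking, assuming a curl new enough (7.84.0 or later) to support the `%header{name}` write-out variable; curl.se here is just an example host:

```shell
# Fetch only the response headers (-sI), discard the body output,
# and print a single named header via the -w write-out format.
curl -sI -o /dev/null -w '%header{content-type}\n' https://curl.se/
```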
On this day twenty-seven years ago, I released the first #curl version. I called it 4.0 as I kept the versioning from the previous names.
a recent change in #curl makes it not accept the HTTP/1 response as valid without that space present...
on this day, only twenty-five years ago, we shipped #curl 6.5 which introduced the fancy -w option
We got another "critical vulnerability" on #curl reported. I figured you might enjoy it.
"The authentication mechanism in cURL does not properly restrict the number of failed authentication attempts, allowing an attacker to brute-force credentials"
Yawn. Away, away you go.
Remember: when you run #curl shipped by Apple with the --cacert flag it won't behave like #curl does everywhere else. As I wrote about last year. I think they're doing it wrong. They think it's fine.
https://daniel.haxx.se/blog/2024/03/08/the-apple-curl-security-incident-12604/
There was a question posed on the #curl IRC channel whether there's ever going to be a need to raise addressing or offsets from 64-bit to something larger, such as 128-bit.
I argue there is no need to do this. 64-bit can already address a very large amount of data. For example, many operating systems and filesystems have a limit of 2**64 for file sizes. But it is difficult to wrap your head around this; how much data can such a file really hold?
Some estimates (*) say that there's going to be around 181 ZB (zettabytes) of data in the world by the end of 2025.
This is only 9812 files if each file holds 2**64 bytes.
*) https://rivery.io/blog/big-data-statistics-how-much-data-is-there-in-the-world/
curl -sS 'https://ggwave-to-file.ggerganov.com/?m=Hello%20world!' --output hello.wav #ggwave #fun #curl
I'm sensing strong renewed anti-GitHub sentiments among my (non-US based) peers these days as the US is seemingly in a free-fall towards chaos.
We will of course keep prioritizing security and safety for the #curl project and its contributors and will act immediately if the signs tell us we should.
Ten years ago on this day we went full GitHub model in #curl: pull-request style development. We have since handled over 10,700 PRs, with an ever-increasing amount of activity.
https://daniel.haxx.se/blog/2015/03/03/curl-embracing-github-more/
Today is also two years since "the nuget story" where I struggled to get a ten year old and vulnerable #curl version delisted:
https://daniel.haxx.se/blog/2023/03/02/the-curl-nuget-story/
@kajer @lo__ @mrmasterkeyboard because @torproject / #TorBrowser already does an excellent job and anything that doesn't work can be done with @dillo / #dillo, #LynxBrowser, #ytdlp and #curl!
I know it is often repeated, but #curl is not a one-man factory:
Join me for a small presentation next week on what we want for #curl in 2025...
https://daniel.haxx.se/blog/2025/02/25/the-curl-roadmap-webinar-2025/