In https://github.com/algesten/ureq/pull/67#issuecomment-647112883,
Shnatsel points out that there are uses for read_timeout vs timeout. For
instance, when downloading a large file it's useful to have a read
timeout in case the download stalls, but it would be hard to pick a good
overall timeout without knowing the size of the file and the bandwidth
of the connection.
* Update documentation.
Synchronize goals section between README and rustdoc, and add two goals
(blocking API; no unsafe) that were mentioned elsewhere in the README.
Add error handling to examples for the module and for the Response
object.
Also add a section on synthetic errors to the top-level module documentation.
* Add back missing close brace.
* Add main function that returns Result.
* Add links to send_bytes and send_form.
* Document chunked encoding for send.
* Use a larger vec of bytes for send.
Adds some feature guards, and removes an unnecessary feature guard
around a call to connect_https (there's an implementation available for
non-TLS that returns UnknownScheme).
Also, remove unnecessary agent.state() method that was only available in
TLS builds. The state field is directly accessible within the crate, and
can be used in both TLS and non-TLS builds.
Co-authored-by: Martin Algesten <martin@algesten.se>
This deprecates timeout_read() and timeout_write() in favor of
timeout(). The new timeout method on Request takes a Duration instead
of a number of milliseconds, and is measured against overall request
time, not per-read time.
Once a request is started, the timeout is turned into a deadline
specific to that call. The deadline is used in conjunction with the
new DeadlineStream class, which sets a timeout on each read according
to the remaining time for the request. Once the request is done,
the DeadlineStream is unwrapped via .into::<Stream>() to become
an undecorated Stream again for return to the pool. Timeouts on the
stream are unset at this point.
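A minimal sketch of the per-read deadline idea (a hypothetical standalone
version, using std::net::TcpStream rather than the crate's internal Stream
type):

    use std::io::{self, Read};
    use std::net::TcpStream;
    use std::time::Instant;

    // Wraps a stream together with the request's overall deadline.
    struct DeadlineStream {
        stream: TcpStream,
        deadline: Instant,
    }

    impl Read for DeadlineStream {
        fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
            // Time remaining until the overall deadline; error out if
            // the deadline has already passed.
            let remaining = self
                .deadline
                .checked_duration_since(Instant::now())
                .ok_or_else(|| {
                    io::Error::new(io::ErrorKind::TimedOut, "request deadline reached")
                })?;
            // Each read gets a timeout equal to the remaining budget.
            self.stream.set_read_timeout(Some(remaining))?;
            self.stream.read(buf)
        }
    }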
Still to be done:
* Add a setting on Agent for default timeout.
* Change header-writing code to apply overall deadline rather than
  per-write timeout.
Fixes #28.
Fix up cfg attributes to work on an xor basis.
Previously, the cfg(any()) attributes would cause issues when
both native-tls and tls features were enabled. Now, https functions
and enum variants will only be created when exactly one of tls and native-tls is
enabled. Additionally, a compile error has been added for when
both tls and native-tls features are enabled.
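A sketch of what the xor-style guards look like (the feature names come
from the commit; the guarded item is illustrative):

    // Present only when tls is enabled and native-tls is not.
    #[cfg(all(feature = "tls", not(feature = "native-tls")))]
    fn connect_https() { /* rustls-backed implementation */ }

    // Present only when native-tls is enabled and tls is not.
    #[cfg(all(feature = "native-tls", not(feature = "tls")))]
    fn connect_https() { /* native-tls-backed implementation */ }

    // Enabling both backends at once is now a hard error.
    #[cfg(all(feature = "tls", feature = "native-tls"))]
    compile_error!("the tls and native-tls features are mutually exclusive");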
This test was making requests to a server on the Internet. That has the
potential to make the test flaky. Also, the test was relying on a
specific behavior from that server (timing out after 2s), which it no
longer exhibits.
This sets up a local test server that exhibits the specific properties
needed for this test.
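A minimal sketch of such a server (hypothetical helper name): bind to an
ephemeral local port, accept one connection, read the request, and then
stall so the client's timeout behavior can be exercised deterministically.

    use std::io::Read;
    use std::net::{SocketAddr, TcpListener};
    use std::thread;
    use std::time::Duration;

    // Returns the address of a local server that never answers.
    fn spawn_stalling_server() -> SocketAddr {
        let listener = TcpListener::bind("127.0.0.1:0").unwrap();
        let addr = listener.local_addr().unwrap();
        thread::spawn(move || {
            let (mut socket, _) = listener.accept().unwrap();
            let mut buf = [0u8; 1024];
            let _ = socket.read(&mut buf); // consume the request bytes
            thread::sleep(Duration::from_secs(10)); // then stall
        });
        addr
    }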
This builds on 753d61b. Before we send a request, we can do a 1-byte
nonblocking peek on the connection. If the server has closed the
connection, this will give us an EOF, and we can take the connection out
of the pool before sending any request on it. This will reduce the
likelihood that we send a non-retryable POST on an already-closed
connection.
The server could still potentially close the connection between when we
make this check and when we finish sending the request, but this should
handle the majority of cases.
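A sketch of the check on a plain TcpStream (hypothetical helper; the crate
applies this idea to pooled streams before reuse):

    use std::io::ErrorKind;
    use std::net::TcpStream;

    // A 1-byte nonblocking peek distinguishes a closed connection
    // (Ok(0), i.e. EOF) from a healthy idle one (WouldBlock).
    fn is_still_usable(stream: &TcpStream) -> bool {
        let mut byte = [0u8; 1];
        if stream.set_nonblocking(true).is_err() {
            return false;
        }
        let usable = match stream.peek(&mut byte) {
            Ok(0) => false,  // EOF: server closed the connection
            Ok(_) => true,   // buffered data pending, but still open
            Err(e) if e.kind() == ErrorKind::WouldBlock => true, // idle, open
            Err(_) => false, // any other error: drop from the pool
        };
        let _ = stream.set_nonblocking(false);
        usable
    }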
If DNS resolves to multiple IPs but the service is only running on one
of them and it isn't the first IP, a connection will fail.
This was detected when running Vault, which would only bind to IPv4,
while localhost resolved to ::1 followed by 127.0.0.1.
After this fix, the service connects without problem.
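An illustrative sketch of the fix, under the assumption that the
connection logic previously took only the first resolved address:

    use std::io;
    use std::net::{TcpStream, ToSocketAddrs};

    // Try every address the name resolves to, not just the first.
    fn connect_any(host: &str, port: u16) -> io::Result<TcpStream> {
        let mut last_err =
            io::Error::new(io::ErrorKind::NotFound, "no addresses resolved");
        for addr in (host, port).to_socket_addrs()? {
            match TcpStream::connect(addr) {
                Ok(stream) => return Ok(stream),
                // e.g. ::1 refuses; fall through and try 127.0.0.1.
                Err(e) => last_err = e,
            }
        }
        Err(last_err)
    }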
This removes the need to take the result of Response::into_json and
convert it into a struct using serde_json::from_value.
This adds no new dependencies, since serde_json already depends on serde.
Users of ureq will have to include `serde_derive` either by importing it
directly or by using serde with the `derive` feature, unless they want to
manually implement `Deserialize` on their structs.
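What usage looks like with a deserialize-capable into_json (a sketch
written against the generic into_json::<T>() found in later ureq
versions; the method name at the time of this change may have differed):

    use serde::Deserialize;

    #[derive(Deserialize)]
    struct Message {
        id: u64,
        body: String,
    }

    fn fetch_message() -> Result<Message, Box<dyn std::error::Error>> {
        // Deserializes the body straight into Message, with no
        // intermediate serde_json::Value / serde_json::from_value step.
        let msg: Message = ureq::get("http://example.com/message")
            .call()?
            .into_json()?;
        Ok(msg)
    }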
This implementation improves over chunked_transfer's Encoder + io::copy
with the following performance optimizations:
1) It avoids copying memory.
2) chunked_transfer's Encoder issues 4 separate write() calls per chunk,
which is costly overhead. Instead, we do a single write() per chunk.
The measured benefit on a Linux machine is a 50% reduction in CPU usage
on an https connection.
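An illustrative sketch of the single-write idea using a vectored write
(not the crate's exact code): the chunk-size line, payload, and trailing
CRLF go out in one write() without copying the payload into a staging
buffer.

    use std::io::{self, IoSlice, Write};

    fn write_chunk<W: Write>(out: &mut W, data: &[u8]) -> io::Result<()> {
        // Chunk format: <hex length>\r\n<payload>\r\n
        let header = format!("{:x}\r\n", data.len());
        let parts = [
            IoSlice::new(header.as_bytes()),
            IoSlice::new(data),
            IoSlice::new(b"\r\n"),
        ];
        // One write per chunk instead of four. A real implementation
        // must loop, since write_vectored may perform a short write.
        out.write_vectored(&parts)?;
        Ok(())
    }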
When sending a JSON request a Content-Type of application/json is
usually wanted. This is often set as a default for JSON methods by HTTP
clients, so it can be confusing when it is not set. However, we do not want
to prevent the user from setting their own Content-Type.
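Sketch of the intended behavior (a hypothetical standalone helper written
against the ureq 1.x request API; exact method availability is assumed):

    // Default the Content-Type for JSON bodies, but never override a
    // value the caller set explicitly.
    fn send_json(req: &mut ureq::Request, body: serde_json::Value) -> ureq::Response {
        if req.header("Content-Type").is_none() {
            req.set("Content-Type", "application/json");
        }
        req.send_string(&body.to_string())
    }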