Fix up cfg attributes to work on an xor basis.
Previously, the cfg(any()) attributes would cause issues when
both the native-tls and tls features were enabled. Now, https functions
and enum variants are only created when exactly one of tls and
native-tls is enabled (tls xor native-tls). Additionally, a compile
error has been added for when both tls and native-tls are enabled.
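A minimal sketch of the resulting cfg pattern (the item names here are placeholders, not the crate's actual functions):

    #[cfg(all(feature = "tls", not(feature = "native-tls")))]
    fn https_agent() { /* rustls-backed implementation */ }

    #[cfg(all(feature = "native-tls", not(feature = "tls")))]
    fn https_agent() { /* native-tls-backed implementation */ }

    // Enabling both backends is now a hard error instead of a silent conflict.
    #[cfg(all(feature = "tls", feature = "native-tls"))]
    compile_error!("the `tls` and `native-tls` features are mutually exclusive");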
This test was making requests to a server on the Internet. That has the
potential to make the test flaky. Also, the test relied on a
specific behavior of that server (timing out after 2s), which the
server no longer exhibits.
This sets up a local test server that exhibits the specific properties
needed for this test.
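A rough sketch of that kind of local fixture, assuming the property the test needs is a server that accepts a connection and then drops it after about two seconds without responding (the real test helper may differ):

    use std::io::Read;
    use std::net::{SocketAddr, TcpListener};
    use std::thread;
    use std::time::Duration;

    fn spawn_slow_server() -> SocketAddr {
        let listener = TcpListener::bind("127.0.0.1:0").expect("bind test server");
        let addr = listener.local_addr().expect("local addr");
        thread::spawn(move || {
            for stream in listener.incoming() {
                let mut stream = stream.expect("accept");
                thread::spawn(move || {
                    // Read the request, then hang for ~2s and drop the socket
                    // without ever writing a response.
                    let mut buf = [0u8; 4096];
                    let _ = stream.read(&mut buf);
                    thread::sleep(Duration::from_secs(2));
                });
            }
        });
        addr
    }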
This builds on 753d61b. Before we send a request, we can do a 1-byte
nonblocking peek on the connection. If the server has closed the
connection, this will give us an EOF, and we can take the connection out
of the pool before sending any request on it. This will reduce the
likelihood that we send a non-retryable POST on an already-closed
connection.
The server could still potentially close the connection between when we
make this check and when we finish sending the request, but this should
handle the majority of cases.
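A minimal sketch of the check using std APIs (ureq's pool code is organized differently): a nonblocking 1-byte peek that treats EOF as a dead connection and WouldBlock as a healthy one.

    use std::io;
    use std::net::TcpStream;

    fn pooled_connection_is_alive(stream: &TcpStream) -> io::Result<bool> {
        stream.set_nonblocking(true)?;
        let mut byte = [0u8; 1];
        let alive = match stream.peek(&mut byte) {
            Ok(0) => false, // EOF: the server closed the connection
            Ok(_) => true,  // unexpected pending data, but the socket is open
            Err(ref e) if e.kind() == io::ErrorKind::WouldBlock => true, // nothing to read: still usable
            Err(e) => {
                stream.set_nonblocking(false)?;
                return Err(e);
            }
        };
        stream.set_nonblocking(false)?;
        Ok(alive)
    }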
If DNS resolves to multiple IPs but the service is only running on one
of them, and that one isn't the first IP returned, the connection will fail.
This was detected when running Vault, which would only bind to IPv4,
while localhost resolved to ::1 followed by 127.0.0.1.
After this fix, the service connects without problem.
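A minimal sketch of the idea behind the fix (not ureq's actual connect path): attempt every resolved address in order rather than only the first.

    use std::io;
    use std::net::{TcpStream, ToSocketAddrs};

    fn connect_any(host: &str, port: u16) -> io::Result<TcpStream> {
        let mut last_err = io::Error::new(io::ErrorKind::NotFound, "no addresses resolved");
        for addr in (host, port).to_socket_addrs()? {
            match TcpStream::connect(addr) {
                Ok(stream) => return Ok(stream),
                // Remember the failure and try the next resolved address.
                Err(e) => last_err = e,
            }
        }
        Err(last_err)
    }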
This removes the need to take the result of Response::into_json and
convert it into a struct with serde_json::from_value.
This adds no new dependencies, since serde_json already depends on serde.
Users of ureq will have to include `serde_derive` either by importing it
directly or by using serde with the `derive` feature, unless they want to
manually implement `Deserialize` on their structs.
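Assuming into_json is now generic over serde's Deserialize (the exact signature and error type depend on the ureq version), usage looks roughly like this:

    use serde::Deserialize;

    // Illustrative struct; its fields just need to match the JSON response.
    #[derive(Deserialize)]
    struct User {
        id: u64,
        name: String,
    }

    fn parse_user(resp: ureq::Response) -> std::io::Result<User> {
        // Previously: serde_json::from_value(resp.into_json()?)
        resp.into_json::<User>()
    }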
This implementation improves over chunked_transfer's Encoder + io::copy
with the following performance optimizations:
1) It avoids copying memory.
2) chunked_transfer's Encoder issues 4 separate write() calls per chunk,
which is costly overhead. Instead, we do a single write() per chunk.
The measured benefit on a Linux machine is a 50% reduction in CPU usage
on an https connection.
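A rough sketch of the single-write-per-chunk part of this (the real encoder also avoids the staging copy made here; this only illustrates collapsing the separate writes into one):

    use std::io::{self, Write};

    fn write_chunk<W: Write>(out: &mut W, data: &[u8]) -> io::Result<()> {
        // Assemble "<size-in-hex>\r\n<data>\r\n" and emit it with one write
        // instead of four separate ones.
        let mut chunk = Vec::with_capacity(data.len() + 16);
        write!(chunk, "{:x}\r\n", data.len())?;
        chunk.extend_from_slice(data);
        chunk.extend_from_slice(b"\r\n");
        out.write_all(&chunk)
    }

    fn finish_chunks<W: Write>(out: &mut W) -> io::Result<()> {
        // Terminating zero-length chunk.
        out.write_all(b"0\r\n\r\n")
    }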
When sending a JSON request, a Content-Type of application/json is
usually wanted. HTTP clients often set this as a default for their JSON
methods, so it can be confusing when it is not set. However, we do not
want to prevent the user from setting their own Content-Type.
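A minimal sketch of the rule (ureq's header handling is more involved): default Content-Type to application/json for JSON bodies, but never overwrite a header the user already set.

    use std::collections::HashMap;

    fn apply_json_default(headers: &mut HashMap<String, String>) {
        let user_set_content_type = headers
            .keys()
            .any(|k| k.eq_ignore_ascii_case("content-type"));
        if !user_set_content_type {
            headers.insert("Content-Type".into(), "application/json".into());
        }
    }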
This is not a perfect solution. It only helps as long as we have not
sent any body bytes: we only discover the error when attempting to read
the response status line, which is after the body bytes have already
been sent. To be able to re-send the body, we would need to introduce a
buffer so the body could be replayed on the next request. We don't
currently do that.