* Remove Request::build
* All mutations on Request follow builder pattern
The previous `build()` on request was necessary because mutating
functions did not follow a proper builder pattern (taking `&mut self`
instead of `mut self`). With a proper builder pattern, the need for
`.build()` goes away.
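A minimal sketch of the consuming builder style this enables (illustrative stand-in type, not the actual ureq signatures):

```rust
// Illustrative sketch only; not the real ureq Request type.
struct Request {
    headers: Vec<(String, String)>,
}

impl Request {
    fn new() -> Self {
        Request { headers: Vec::new() }
    }

    // Takes `mut self` and returns `Self`, so calls chain directly
    // and no terminating `.build()` is required.
    fn set(mut self, header: &str, value: &str) -> Self {
        self.headers.push((header.to_string(), value.to_string()));
        self
    }
}

fn main() {
    let req = Request::new()
        .set("Accept", "application/json")
        .set("X-Custom", "1");
    assert_eq!(req.headers.len(), 2);
}
```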
* All Request body and call methods consume self
Anything which "executes" the request will now consume the `Request`
to produce a `Result<Response>`.
* Move all config from request to agent builder
Timeouts, redirect config, proxy settings and TLS config are now on
`AgentBuilder`.
* Rename max_pool_connections -> max_idle_connections
* Rename max_pool_connections_per_host -> max_idle_connections_per_host
Consistent internal and external naming.
* Introduce new AgentConfig for static config created by builder.
`Agent` can be seen as having two parts: static config and shared
mutable state. The static config goes into `AgentConfig` and the
shared mutable state into `AgentState`.
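A rough sketch of the split (field names are illustrative assumptions, not the actual ureq definitions):

```rust
use std::sync::{Arc, Mutex};
use std::time::Duration;

// Static configuration, fixed once the builder has run.
struct AgentConfig {
    timeout: Option<Duration>,
    max_idle_connections: usize,
}

// Mutable state shared by everything the agent drives.
struct AgentState {
    pool: Vec<String>, // stand-in for a connection pool
}

struct Agent {
    config: Arc<AgentConfig>,
    state: Arc<Mutex<AgentState>>,
}

fn main() {
    let _agent = Agent {
        config: Arc::new(AgentConfig { timeout: None, max_idle_connections: 100 }),
        state: Arc::new(Mutex::new(AgentState { pool: Vec::new() })),
    };
}
```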
* Replace all use of `Default` for `new`.
Deriving or implementing `Default` makes for a secondary instantiation
API. It is useful in some cases, but gets very confusing when there
is both `new` _and_ a `Default`. It's especially devious for derived
values where a reasonable default is not `0`, `false` or `None`.
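An illustration of the pitfall with derived defaults (unrelated to any actual ureq type):

```rust
// With a derived Default, max_redirects silently becomes 0, which is
// probably not the intended "reasonable" default.
#[derive(Default)]
struct Limits {
    max_redirects: u32,
}

// A `new()` makes the intended default explicit and keeps a single
// instantiation API.
impl Limits {
    fn new() -> Self {
        Limits { max_redirects: 5 }
    }
}

fn main() {
    assert_eq!(Limits::default().max_redirects, 0);
    assert_eq!(Limits::new().max_redirects, 5);
}
```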
* Remove feature native_tls, we want only native rustls.
This feature made for very clunky handling throughout the code. From a
security point of view, it's better to stick with one single TLS API.
Rustls recently got an official audit (very positive).
https://github.com/ctz/rustls/tree/master/audit
Rustls deliberately omits support for older, insecure TLS such as TLS
1.1 or RC4. This might be a problem for a user of ureq, but on balance
it is not considered important enough to keep native_tls.
* Remove auth and support for basic auth.
The API just wasn't enough. A future reintroduction should at least
also provide a `Bearer` mechanism and possibly more.
* Rename jar -> cookie_store
* Rename jar -> cookie_tin
Just make some field names sync up with the type.
* Drop "cookies" as default feature
The need for handling cookies is probably rare, let's not enable it by
default.
* Change all feature checks for "cookie" to "cookies"
The outward-facing feature is "cookies", and it's better form for the
code to use the official feature name rather than the name of the
optional `cookie` library.
* Keep `set` on Agent level as well as AgentBuilder.
The idea is that an auth exchange might result in a header that needs
to be set _after_ the agent has been built.
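A sketch of the intended flow, using stand-in types rather than the real ureq API (the `set` methods here mirror the description above and are assumptions):

```rust
// Purely illustrative stand-ins; not the ureq types.
use std::sync::Mutex;

struct AgentBuilder { headers: Vec<(String, String)> }
struct Agent { headers: Mutex<Vec<(String, String)>> }

impl AgentBuilder {
    fn new() -> Self { AgentBuilder { headers: Vec::new() } }
    fn set(mut self, name: &str, value: &str) -> Self {
        self.headers.push((name.to_string(), value.to_string()));
        self
    }
    fn build(self) -> Agent {
        Agent { headers: Mutex::new(self.headers) }
    }
}

impl Agent {
    // `set` stays available after build, e.g. for a token obtained later.
    fn set(&self, name: &str, value: &str) {
        self.headers.lock().unwrap().push((name.to_string(), value.to_string()));
    }
}

fn main() {
    let agent = AgentBuilder::new().set("User-Agent", "my-app/1.0").build();
    // Later, once an auth exchange has produced a token:
    agent.set("Authorization", "Bearer abc123");
    assert_eq!(agent.headers.lock().unwrap().len(), 2);
}
```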
This feature was broken in #67, which reset timeouts on the
stream before passing it to set_stream.
As part of this change, refactor the internal storage of
timeouts on the Request object to use Option<Duration>.
Remove the deadline field on Response. It wasn't used. The
deadline field on unit was used instead.
Add a unit test.
Previously, Agent stored most of its state in one big
Arc<Mutex<AgentState>>. This separates the Arc from the Mutexes.
Now, Agent is a thin wrapper around an Arc<AgentState>. The individual
components that need locking, ConnectionPool and CookieStore, now are
responsible for their own locking.
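A hedged sketch of the new shape (stand-in types, not the literal ureq code):

```rust
use std::sync::{Arc, Mutex};

// Each component that needs synchronization carries its own lock.
struct ConnectionPool {
    idle: Mutex<Vec<String>>, // stand-in for pooled streams
}

struct CookieStore {
    cookies: Mutex<Vec<String>>, // stand-in for stored cookies
}

struct AgentState {
    pool: ConnectionPool,
    cookie_store: CookieStore,
    proxy: Option<String>, // plain config, no lock needed
}

// Agent is a thin, cheaply cloneable handle.
#[derive(Clone)]
struct Agent {
    state: Arc<AgentState>,
}

fn main() {
    let agent = Agent {
        state: Arc::new(AgentState {
            pool: ConnectionPool { idle: Mutex::new(Vec::new()) },
            cookie_store: CookieStore { cookies: Mutex::new(Vec::new()) },
            proxy: None,
        }),
    };
    // Locking the pool does not contend with locking the cookie store.
    agent.state.pool.idle.lock().unwrap().push("stream".to_string());
    agent.state.cookie_store.cookies.lock().unwrap().push("cookie".to_string());
    let _other_handle = agent.clone();
}
```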
There were a couple of reasons for this. Internal components that needed
an Agent were often instead carrying around an Arc<Mutex<AgentState>>.
This felt like the components were too intertwined: those other
components shouldn't have to care quite so much about how Agent is
implemented. Also, this led to compromises of convenience: the Proxy on
Agent wound up stored inside the `Arc<Mutex<AgentState>>` even though it
didn't need locking. It was more convenient that way because that was
what Request and Unit had access to.
The other reason to push things down like this is that it can reduce
lock contention. Mutations to the cookie store don't need to lock the
connection pool, and vice versa. This was a secondary concern, since I
haven't actually profiled these things and found them to be a problem,
but it's a happy result of the refactoring.
Now all the components outside of Agent take an Agent instead of
AgentState.
In the process I removed `Agent.cookie()`. Its API was hard to use
correctly, since it didn't distinguish between cookies on different
hosts. And it would have required updates as part of this refactoring.
I'm open to reinstating some similar functionality with a refreshed API.
I kept `Agent.set_cookie`, but updated its method signature to take a
URL as well as a cookie.
Many of ConnectionPool's methods went from `&mut self` to `&self`,
because ConnectionPool is now using interior mutability.
In the process, rename set_foo methods to just foo, since methods on the
builder will always be setters.
Adds a new() method on ConnectionPool so it can be constructed directly
with the desired limits. Removes the setter methods on ConnectionPool
for those limits. This means that connection limits can only be set when
an Agent is built.
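A sketch of what the interior-mutability shape looks like, with limits fixed at construction (illustrative, not the actual implementation):

```rust
use std::sync::Mutex;

// Limits are set once in new(); mutation of the idle list happens
// behind the pool's own Mutex, so methods can take `&self`.
struct ConnectionPool {
    max_idle_connections: usize,
    idle: Mutex<Vec<String>>, // stand-in for idle streams
}

impl ConnectionPool {
    fn new(max_idle_connections: usize) -> Self {
        ConnectionPool {
            max_idle_connections,
            idle: Mutex::new(Vec::new()),
        }
    }

    // `&self`, not `&mut self`: interior mutability via the Mutex.
    fn add(&self, stream: String) {
        let mut idle = self.idle.lock().unwrap();
        if idle.len() < self.max_idle_connections {
            idle.push(stream);
        }
    }
}

fn main() {
    let pool = ConnectionPool::new(2);
    pool.add("stream-1".to_string());
    pool.add("stream-2".to_string());
    pool.add("stream-3".to_string()); // dropped: pool already at its limit
    assert_eq!(pool.idle.lock().unwrap().len(), 2);
}
```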
There were two tests that verify Send and Sync implementations, one for
Agent and one for Request. This PR moves the Request test to request.rs,
and changes both tests to more directly verify the traits. There may be
another way to do this; I'm not sure.
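One common way to verify the traits directly is a bounds-checking helper along these lines (a sketch, not necessarily the exact test in the repo):

```rust
// Compile-time check: this only builds if the type is Send + Sync.
fn assert_send_sync<T: Send + Sync>() {}

#[test]
fn agent_and_request_are_send_and_sync() {
    assert_send_sync::<ureq::Agent>();
    assert_send_sync::<ureq::Request>();
}
```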
This adds validation of header values on receive, and of both header
names and header values on send. This doesn't change the return
type of set to be a Result, it just validates when the request is
sent. Also removes the section in the README describing handling
of invalid headers, and updates a test that verified acceptance of
non-ASCII headers so that it verifies rejection of them instead.
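Roughly the kind of rule this implies for values (a sketch, not the actual ureq validation code):

```rust
// Header values are restricted to visible ASCII plus space and tab;
// non-ASCII bytes and control characters are rejected.
fn is_valid_header_value(value: &str) -> bool {
    value.bytes().all(|b| b == b'\t' || (b' '..=b'~').contains(&b))
}

fn main() {
    assert!(is_valid_header_value("text/html; charset=utf-8"));
    assert!(!is_valid_header_value("smörgåsbord")); // non-ASCII: rejected
}
```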
Gets rid of synthetic_error, and makes the various send_* methods return `Result<Response, Error>`.
Introduces a new error type "HTTP", which represents an error due to status codes 4xx or 5xx.
The HTTP error type contains a boxed Response, so users can read the actual response if they want.
Adds an `error_for_status` setting to disable the functionality of treating 4xx and 5xx as errors.
Adds .unwrap() to a lot of tests.
Fixes #128.
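A hedged usage sketch of the new error flow (the exact method and error variant names are assumptions and may differ from the released API):

```rust
fn example() {
    match ureq::get("http://example.com/missing").call() {
        Ok(response) => println!("success: {}", response.status()),
        Err(err) => {
            // With the default settings, a 4xx/5xx status lands here as an
            // error carrying the boxed Response, so callers that care can
            // still inspect the body.
            println!("request failed: {}", err);
        }
    }
}
```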
In a few places we relied on "localhost" as a default if a URL's host
was not set, but I think it's better to error out in these cases.
In general, there are a few places in Unit that assumed there is a
host as part of the URL. I've made that explicit by doing a check
at the beginning of `connect()`. I've also tried to plumb through
the semantics of "host is always present" by changing the parameter
types of some of the functions that use the hostname.
I considered a more thorough way to express this with types - for
instance implementing an `HttpUrl` struct that embeds a `Url`, and
exports most of the same methods, but guarantees that host is always
present. However, that was more invasive than this so I did a smaller
change to start.
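A sketch of the check at the start of `connect()` (illustrative; the error type and wording are assumptions):

```rust
use url::Url;

// Fail early if the URL has no host, instead of silently assuming
// "localhost" further down.
fn host_of(url: &Url) -> Result<&str, String> {
    url.host_str()
        .ok_or_else(|| format!("URL has no host: {}", url))
}

fn main() {
    let ok = Url::parse("http://example.com/path").unwrap();
    assert_eq!(host_of(&ok).unwrap(), "example.com");

    let bad = Url::parse("data:text/plain,hello").unwrap();
    assert!(host_of(&bad).is_err());
}
```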
Previously, using `.send()` on `Request` required setting either the
Transfer-Encoding or the Content-Length header.
In an effort to provide better ergonomics for this library and to avoid
making users fall into a not-so-obvious pitfall, the library should
build a valid Request without asking the user to mess around with the
headers.
This commit attempts to fix this issue: if the user uses `.send()` to
provide a reader of unknown size, the chunked Transfer-Encoding will be
used. Of course, there are prior checks to ensure we do not override
the user's wishes, for example if the user already set a Content-Length
or a Transfer-Encoding.
This commit also fixes an edge case where the Content-Length would not
be automatically set if a Transfer-Encoding is set but is not chunked.
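A hedged usage sketch of what this enables (assumes the `.send(reader)` behavior described above):

```rust
use std::fs::File;

fn upload() -> std::io::Result<()> {
    let reader = File::open("large-file.bin")?;
    // The reader's size is unknown and no Content-Length or
    // Transfer-Encoding header was set, so the body should go out
    // with `Transfer-Encoding: chunked`.
    let _response = ureq::post("http://example.com/upload").send(reader);
    Ok(())
}
```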
`Some(AgentState)` seems to be assumed pretty much everywhere. I could not
find any test or piece of code hinting at how `None` should be interpreted,
or even see how a state of `None` could be constructed.
Co-authored-by: Ulrik <ulrikm@spotify.com>
This also reverts a change to send_body that was originally added to
return the number of bytes written. It's no longer needed now that we
check the size of the reader in advance.
Fixes #76.
In https://github.com/algesten/ureq/pull/67#issuecomment-647112883,
Shnatsel points out that there are uses for read_timeout vs timeout. For
instance, when downloading a large file it's useful to have a read
timeout in case the download stalls, but it would be hard to pick a good
overall timeout without knowing the size of the file and the bandwidth
of the connection.
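A hedged usage sketch of the distinction (the setting names here are assumptions and may not match the current API):

```rust
use std::time::Duration;

fn build_agent() -> ureq::Agent {
    ureq::AgentBuilder::new()
        // Fail a read if the download stalls for 30 seconds...
        .timeout_read(Duration::from_secs(30))
        // ...but set no overall timeout, since the total transfer time
        // depends on file size and bandwidth.
        .build()
}
```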
* Update documentation.
Synchronize goals section between README and rustdoc, and add two goals
(blocking API; no unsafe) that were mentioned elsewhere in the README.
Add error handling to examples for the module and for the Response
object.
Also add a section on synthetic errors to the top-level module documentation.
* Add back missing close brace.
* Add main function that returns Result.
* Add links to send_bytes and send_form.
* Document chunked encoding for send.
* Use a larger vec of bytes for send.
This deprecates timeout_read() and timeout_write() in favor of
timeout(). The new timeout method on Request takes a Duration instead
of a number of milliseconds, and is measured against overall request
time, not per-read time.
Once a request is started, the timeout is turned into a deadline
specific to that call. The deadline is used in conjunction with the
new DeadlineStream class, which sets a timeout on each read according
to the remaining time for the request. Once the request is done,
the DeadlineStream is unwrapped via `into()` back into an undecorated
Stream for return to the pool. Timeouts on the
stream are unset at this point.
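A rough sketch of the per-read deadline idea (illustrative only; not the actual ureq `DeadlineStream` or `Stream` types):

```rust
use std::io::{self, Read};
use std::net::TcpStream;
use std::time::Instant;

// Before every read, translate the remaining time until the overall
// deadline into a read timeout on the underlying socket.
struct DeadlineStream {
    stream: TcpStream,
    deadline: Instant,
}

impl Read for DeadlineStream {
    fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
        let remaining = self
            .deadline
            .checked_duration_since(Instant::now())
            .ok_or_else(|| io::Error::new(io::ErrorKind::TimedOut, "request deadline reached"))?;
        self.stream.set_read_timeout(Some(remaining))?;
        self.stream.read(buf)
    }
}

// When the request is done, unwrap back to the plain stream and clear
// the timeout before returning it to the pool.
impl From<DeadlineStream> for TcpStream {
    fn from(ds: DeadlineStream) -> TcpStream {
        ds.stream.set_read_timeout(None).ok();
        ds.stream
    }
}
```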
Still to be done:
Add a setting on Agent for default timeout.
Change header-writing code to apply overall deadline rather than
per-write timeout.
Fixes #28.
When sending a JSON request a Content-Type of application/json is
usually wanted. This is often set as a default for JSON methods by HTTP
clients, so it can be confusing when it is not set. However, we do not want
to prevent the user from setting their own Content-Type.
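The intended behavior, roughly (a sketch only; the `header` and `set` method names are assumptions):

```rust
// Sketch: only apply the default when the caller has not already
// chosen a Content-Type.
fn with_json_content_type(request: ureq::Request) -> ureq::Request {
    if request.header("Content-Type").is_none() {
        request.set("Content-Type", "application/json")
    } else {
        request
    }
}
```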