Merge branch 'release-2.0' into json_deserialize

Jacob Hoffman-Andrews
2020-11-21 16:25:14 -08:00
17 changed files with 630 additions and 271 deletions

CONTRIBUTING.md (new file, +21)

@@ -0,0 +1,21 @@
## License
Copyright (c) 2019 Martin Algesten
Licensed under either of
* Apache License, Version 2.0
([LICENSE-APACHE](LICENSE-APACHE) or http://www.apache.org/licenses/LICENSE-2.0)
* MIT license
([LICENSE-MIT](LICENSE-MIT) or http://opensource.org/licenses/MIT)
at your option.
## Contribution
Unless you explicitly state otherwise, any contribution intentionally submitted
for inclusion in the work by you, as defined in the Apache-2.0 license, shall be
dual licensed under the Apache License, Version 2.0, and the MIT license, without
any additional terms or conditions. See LICENSE-APACHE and LICENSE-MIT for
details.

Cargo.toml

@@ -2,11 +2,11 @@
name = "ureq" name = "ureq"
version = "1.5.1" version = "1.5.1"
authors = ["Martin Algesten <martin@algesten.se>", "Jacob Hoffman-Andrews <ureq@hoffman-andrews.com>"] authors = ["Martin Algesten <martin@algesten.se>", "Jacob Hoffman-Andrews <ureq@hoffman-andrews.com>"]
description = "Minimal HTTP request library" description = "Simple, safe HTTP client"
license = "MIT/Apache-2.0" license = "MIT/Apache-2.0"
repository = "https://github.com/algesten/ureq" repository = "https://github.com/algesten/ureq"
readme = "README.md" readme = "README.md"
keywords = ["web", "request", "http", "rest", "client"] keywords = ["web", "request", "rest", "https", "http", "client"]
categories = ["web-programming::http-client"] categories = ["web-programming::http-client"]
edition = "2018" edition = "2018"

README.md (264 lines changed)

@@ -1,115 +1,195 @@
[comment]: # (README.md is autogenerated from src/lib.rs by `cargo readme > README.md`)
# ureq
![](https://github.com/algesten/ureq/workflows/CI/badge.svg)
[![CratesIO](https://img.shields.io/crates/v/ureq.svg)](https://crates.io/crates/ureq)
[![Documentation](https://docs.rs/ureq/badge.svg)](https://docs.rs/ureq)
> Minimal request library in rust.
A simple, safe HTTP client.
Ureq's first priority is being easy for you to use. It's great for
anyone who wants a low-overhead HTTP client that just gets the job done. Works
very well with HTTP APIs. Its features include cookies, JSON, HTTP proxies,
HTTPS, and charset decoding.
## Usage
Ureq is in pure Rust for safety and ease of understanding. It avoids using
`unsafe` directly. It [uses blocking I/O][blocking] instead of async I/O, because that keeps
the API simple and keeps dependencies to a minimum. For TLS, ureq uses
[rustls].
[blocking]: #blocking-i-o-for-simplicity
### Usage
In its simplest form, ureq looks like this:
```rust
// sync post request of some json.
// requires feature:
// `ureq = { version = "*", features = ["json"] }`
let resp = ureq::post("https://myapi.example.com/ingest")
let body: String = ureq::get("http://example.com")
    .set("Accept", "text/html")
    .call()?
    .into_string()?;
```
For more involved tasks, you'll want to create an [Agent]. An Agent
holds a connection pool for reuse, and a cookie store if you use the
"cookies" feature. An Agent can be cheaply cloned due to an internal
[Arc](std::sync::Arc) and all clones of an Agent share state among each other. Creating
an Agent also allows setting options like the TLS configuration.
```rust
use ureq::{Agent, AgentBuilder};
use std::time::Duration;
let agent: Agent = ureq::AgentBuilder::new()
.timeout_read(Duration::from_secs(5))
.timeout_write(Duration::from_secs(5))
.build();
let body: String = agent.get("http://example.com/page")
.call()?
.into_string()?;
// Reuses the connection from previous request.
let response: String = agent.put("http://example.com/upload")
.set("Authorization", "example-token")
.call()?
.into_string()?;
```
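The text introducing the Agent example above notes that an Agent can be cheaply cloned and that all clones share state through an internal Arc. A minimal sketch of what that enables, assuming the Agent type can be moved to another thread (the URLs are placeholders):

```rust
use std::thread;
use ureq::Agent;

fn fetch_in_parallel(agent: Agent) {
    // The clone shares the same connection pool (and cookie store, if enabled)
    // through the Agent's internal Arc.
    let agent2 = agent.clone();
    let worker = thread::spawn(move || {
        let status = agent2.get("http://example.com/a").call().map(|r| r.status());
        println!("worker got {:?}", status);
    });

    // Connections opened by either clone can be reused by the other.
    let status = agent.get("http://example.com/b").call().map(|r| r.status());
    println!("main got {:?}", status);
    worker.join().unwrap();
}
```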
Ureq supports sending and receiving json, if you enable the "json" feature:
```rust
// Requires the `json` feature enabled.
let resp: String = ureq::post("http://myapi.example.com/ingest")
.set("X-My-Header", "Secret") .set("X-My-Header", "Secret")
.send_json(serde_json::json!({ .send_json(ureq::json!({
"name": "martin", "name": "martin",
"rust": true "rust": true
}))?; }))?
.into_string()?;
// .ok() tells if response is 200-299.
if resp.ok() {
println!("success: {}", resp.into_string()?);
} else {
println!("error {}: {}", resp.status(), resp.into_string()?);
}
```
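The example above covers the sending side of the `json` feature; a corresponding sketch for receiving JSON, assuming `into_json()` can produce a `serde_json::Value` and relying on the `From<io::Error>` conversion the error module provides (the endpoint and field name are made up):

```rust
// Requires the `json` feature.
fn read_ingest_status() -> Result<(), ureq::Error> {
    let status: serde_json::Value = ureq::get("http://myapi.example.com/ingest/status")
        .call()?
        .into_json()?;
    // Fields of the returned document are indexed like any serde_json::Value.
    println!("last run: {}", status["last_run"]);
    Ok(())
}
```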
## About 1.0.0
### Features
This crate is now 1.x.x. It signifies there will be no more breaking
API changes (for better or worse). I personally use this code in
production system reading data from AWS. Whether the quality is good
enough for other use cases is a "YMMV".
## ureq's future
I asked for feedback on [ureq's future
direction](https://www.reddit.com/r/rust/comments/eu6qg8/future_of_ureq_http_client_library/)
and came to the conclusion that there's enough interest in a simple
blocking http client to keep it going. Another motivation is that I
use it extensively for my own work, to talk to S3.
I'll keep maintaining ureq. I will try to keep dependencies somewhat
fresh and try to address bad bugs. I will however not personally
implement new features in ureq, but I do welcome PR with open arms.
The code base is extremely simple, one might even call naive. It's a
good project to hack on as first learning experience in Rust. I will
uphold some base line of code hygiene, but won't block a PR due to
something being a bit inelegant.
## Features
To enable a minimal dependency tree, some features are off by default.
You can control them when including ureq as a dependency.
`ureq = { version = "*", features = ["json", "charset"] }`
* `tls` enables https. This is enabled by default.
* `cookies` enables handling cookies between requests in an agent.
* `json` enables [Response::into_json()] and [Request::send_json()] via serde_json.
* `charset` enables interpreting the charset part of the Content-Type header
(e.g. `Content-Type: text/plain; charset=iso-8859-1`). Without this, the
library defaults to Rust's built in `utf-8`.
## Motivation
* Minimal dependency tree
* Obvious API
* Blocking API
* Convenience over correctness
* No use of unsafe
This library tries to provide a convenient request library with a minimal dependency
tree and an obvious API. It is inspired by libraries like
## Plain requests
Most standard methods (GET, POST, PUT etc), are supported as functions from the
top of the library ([get()], [post()], [put()], etc).
These top level http method functions create a [Request] instance
which follows a build pattern. The builders are finished using:
* [`.call()`][Request::call()] without a request body.
* [`.send()`][Request::send()] with a request body as [Read][std::io::Read] (chunked encoding support for non-known sized readers).
* [`.send_string()`][Request::send_string()] body as string.
* [`.send_bytes()`][Request::send_bytes()] body as bytes.
* [`.send_form()`][Request::send_form()] key-value pairs as application/x-www-form-urlencoded.
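The examples so far only use `.call()` and `.send_json()` from the list above; here is a short sketch of the form and string finishers as well (the URL and credentials are placeholders):

```rust
fn submit() -> Result<(), ureq::Error> {
    // Key-value pairs are sent as application/x-www-form-urlencoded.
    let login = ureq::post("http://example.com/login")
        .send_form(&[("user", "martin"), ("password", "hunter2")])?;
    println!("login status: {}", login.status());

    // A string body gets a Content-Length header since its size is known.
    ureq::post("http://example.com/echo").send_string("hello")?;
    Ok(())
}
```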
## JSON
By enabling the `ureq = { version = "*", features = ["json"] }` feature,
the library supports serde json.
* [`request.send_json()`][Request::send_json()] send body as serde json.
* [`response.into_json()`][Response::into_json()] transform response to json.
## Content-Length and Transfer-Encoding
The library will send a Content-Length header on requests with bodies of
known size, in other words, those sent with
[`.send_string()`][Request::send_string()],
[`.send_bytes()`][Request::send_bytes()],
[`.send_form()`][Request::send_form()], or
[`.send_json()`][Request::send_json()]. If you send a
request body with [`.send()`][Request::send()],
which takes a [Read][std::io::Read] of unknown size, ureq will send Transfer-Encoding:
chunked, and encode the body accordingly. Bodyless requests
(GETs and HEADs) are sent with [`.call()`][Request::call()]
and ureq adds neither a Content-Length nor a Transfer-Encoding header.
If you set your own Content-Length or Transfer-Encoding header before
sending the body, ureq will respect that header by not overriding it,
and by encoding the body or not, as indicated by the headers you set.
```rust
let resp = ureq::post("http://my-server.com/ingest")
.set("Transfer-Encoding", "chunked")
.send_string("Hello world");
```
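As described above, a body given to `.send()` as an arbitrary reader goes out with `Transfer-Encoding: chunked` because its size isn't known up front. A sketch using a `File` as that reader (the path and URL are placeholders):

```rust
use std::fs::File;

fn upload() -> Result<(), ureq::Error> {
    // A File has no known size from ureq's point of view, so the body
    // is sent with Transfer-Encoding: chunked.
    let file = File::open("/tmp/upload.bin")?;
    let resp = ureq::put("http://example.com/upload")
        .set("Content-Type", "application/octet-stream")
        .send(file)?;
    println!("uploaded, status {}", resp.status());
    Ok(())
}
```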
## Character encoding
By enabling the `ureq = { version = "*", features = ["charset"] }` feature,
the library supports sending/receiving other character sets than `utf-8`.
For [`response.into_string()`][Response::into_string()] we read the
header `Content-Type: text/plain; charset=iso-8859-1` and if it contains a charset
specification, we try to decode the body using that encoding. In the absence of, or failing
to interpret the charset, we fall back on `utf-8`.
Similarly when using [`request.send_string()`][Request::send_string()],
we first check if the user has set a `; charset=<whatwg charset>` and attempt
to encode the request body using that.
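A matching sketch for the send side of the charset handling just described: with the `charset` feature enabled, a `; charset=` parameter set by the caller drives how `.send_string()` encodes the body (the URL is a placeholder):

```rust
fn send_latin1() -> Result<(), ureq::Error> {
    // With the `charset` feature, the body below is encoded as ISO-8859-1
    // rather than the default utf-8 before it is sent.
    ureq::post("http://example.com/submit")
        .set("Content-Type", "text/plain; charset=iso-8859-1")
        .send_string("Hällo wörld")?;
    Ok(())
}
```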
## Blocking I/O for simplicity
Rust supports [asynchronous (async) I/O][async], but ureq does not use it. Async I/O
allows serving many concurrent requests without high costs in memory and OS threads. But
it comes at a cost in complexity. Async programs need to pull in a runtime (usually
[async-std] or [tokio]). They also need async variants of any method that might block, and of
[any method that might call another method that might block][what-color]. That means async
programs usually have a lot of dependencies - which adds to compile times, and increases
risk.
The costs of async are worth paying, if you're writing an HTTP server that must serve
many many clients with minimal overhead. However, for HTTP _clients_, we believe that the
cost is usually not worth paying. The low-cost alternative to async I/O is blocking I/O,
which has a different price: it requires an OS thread per concurrent request. However,
that price is usually not high: most HTTP clients make requests sequentially, or with
low concurrency.
That's why ureq uses blocking I/O and plans to stay that way. Other HTTP clients offer both
an async API and a blocking API, but we want to offer a blocking API without pulling in all
the dependencies required by an async API.
[async]: https://rust-lang.github.io/async-book/01_getting_started/02_why_async.html
[async-std]: https://github.com/async-rs/async-std#async-std
[tokio]: https://github.com/tokio-rs/tokio#tokio
[what-color]: https://journal.stuffwithstuff.com/2015/02/01/what-color-is-your-function/
------------------------------------------------------------------------------
Ureq is inspired by other great HTTP clients like
[superagent](http://visionmedia.github.io/superagent/) and
[the fetch API](https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API).
### Sync forever
If ureq is not what you're looking for, check out these other Rust HTTP clients:
[surf](https://crates.io/crates/surf), [reqwest](https://crates.io/crates/reqwest),
[isahc](https://crates.io/crates/isahc), [attohttpc](https://crates.io/crates/attohttpc),
[actix-web](https://crates.io/crates/actix-web), and [hyper](https://crates.io/crates/hyper).
This library uses blocking socket reads and writes. When it was
created, there wasn't any async/await support in rust, and for my own
purposes, blocking IO was fine. At this point, one good reason to keep
this library going is that it is blocking (the other is that it does not
use unsafe).
## TODO
- [ ] Forms with application/x-www-form-urlencoded
- [ ] multipart/form-data
- [ ] Expect 100-continue
- [x] Use `rustls` when [ring with versioned asm symbols](https://github.com/briansmith/ring/pull/619) is released. (PR is not resolved, but most implementations have settled on 0.13)
## License
Copyright (c) 2019 Martin Algesten
Licensed under either of
* Apache License, Version 2.0
([LICENSE-APACHE](LICENSE-APACHE) or http://www.apache.org/licenses/LICENSE-2.0)
* MIT license
([LICENSE-MIT](LICENSE-MIT) or http://opensource.org/licenses/MIT)
at your option.
## Contribution
Unless you explicitly state otherwise, any contribution intentionally submitted
for inclusion in the work by you, as defined in the Apache-2.0 license, shall be
dual licensed as above, without any additional terms or conditions.
[rustls]: https://docs.rs/rustls/
[std::sync::Arc]: https://doc.rust-lang.org/stable/alloc/sync/struct.Arc.html
[std::io::Read]: https://doc.rust-lang.org/stable/std/io/trait.Read.html
[Agent]: https://docs.rs/ureq/latest/ureq/struct.Agent.html
[get()]: https://docs.rs/ureq/latest/ureq/fn.get.html
[post()]: https://docs.rs/ureq/latest/ureq/fn.post.html
[put()]: https://docs.rs/ureq/latest/ureq/fn.put.html
[Request]: https://docs.rs/ureq/latest/ureq/struct.Request.html
[Request::call()]: https://docs.rs/ureq/latest/ureq/struct.Request.html#method.call
[Request::send()]: https://docs.rs/ureq/latest/ureq/struct.Request.html#method.send
[Request::send_bytes()]: https://docs.rs/ureq/latest/ureq/struct.Request.html#method.send_bytes
[Request::send_string()]: https://docs.rs/ureq/latest/ureq/struct.Request.html#method.send_string
[Request::send_json()]: https://docs.rs/ureq/latest/ureq/struct.Request.html#method.send_json
[Request::send_form()]: https://docs.rs/ureq/latest/ureq/struct.Request.html#method.send_form
[Response::into_json()]: https://docs.rs/ureq/latest/ureq/struct.Response.html#method.into_json
[Response::into_string()]: https://docs.rs/ureq/latest/ureq/struct.Response.html#method.into_string

README.tpl (new file, +22)

@@ -0,0 +1,22 @@
[comment]: # (README.md is autogenerated from src/lib.rs by `cargo readme > README.md`)
# {{crate}}
{{readme}}
[rustls]: https://docs.rs/rustls/
[std::sync::Arc]: https://doc.rust-lang.org/stable/alloc/sync/struct.Arc.html
[std::io::Read]: https://doc.rust-lang.org/stable/std/io/trait.Read.html
[Agent]: https://docs.rs/ureq/latest/ureq/struct.Agent.html
[get()]: https://docs.rs/ureq/latest/ureq/fn.get.html
[post()]: https://docs.rs/ureq/latest/ureq/fn.post.html
[put()]: https://docs.rs/ureq/latest/ureq/fn.put.html
[Request]: https://docs.rs/ureq/latest/ureq/struct.Request.html
[Request::call()]: https://docs.rs/ureq/latest/ureq/struct.Request.html#method.call
[Request::send()]: https://docs.rs/ureq/latest/ureq/struct.Request.html#method.send
[Request::send_bytes()]: https://docs.rs/ureq/latest/ureq/struct.Request.html#method.send_bytes
[Request::send_string()]: https://docs.rs/ureq/latest/ureq/struct.Request.html#method.send_string
[Request::send_json()]: https://docs.rs/ureq/latest/ureq/struct.Request.html#method.send_json
[Request::send_form()]: https://docs.rs/ureq/latest/ureq/struct.Request.html#method.send_form
[Response::into_json()]: https://docs.rs/ureq/latest/ureq/struct.Response.html#method.into_json
[Response::into_string()]: https://docs.rs/ureq/latest/ureq/struct.Response.html#method.into_string

src/error.rs

@@ -1,17 +1,108 @@
use crate::response::Response; use url::Url;
use std::fmt;
use std::io::{self, ErrorKind};
use std::error;
use std::fmt::{self, Display};
use std::io::{self};
use crate::Response;
/// An error that may occur when processing a Request.
#[derive(Debug)] #[derive(Debug)]
pub enum Error { pub struct Error {
kind: ErrorKind,
message: Option<String>,
url: Option<Url>,
source: Option<Box<dyn error::Error>>,
response: Option<Box<Response>>,
}
impl Display for Error {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
if let Some(url) = &self.url {
write!(f, "{}: ", url)?;
}
if let Some(response) = &self.response {
write!(f, "status code {}", response.status())?;
} else {
write!(f, "{:?}", self.kind)?;
}
if let Some(message) = &self.message {
write!(f, ": {}", message)?;
}
if let Some(source) = &self.source {
write!(f, ": {}", source)?;
}
Ok(())
}
}
impl error::Error for Error {
fn source(&self) -> Option<&(dyn error::Error + 'static)> {
self.source.as_deref()
}
}
impl Error {
pub(crate) fn new(kind: ErrorKind, message: Option<String>) -> Self {
Error {
kind,
message,
url: None,
source: None,
response: None,
}
}
pub(crate) fn url(mut self, url: Url) -> Self {
self.url = Some(url);
self
}
pub(crate) fn src(mut self, e: impl error::Error + 'static) -> Self {
self.source = Some(Box::new(e));
self
}
pub(crate) fn response(mut self, response: Response) -> Self {
self.response = Some(Box::new(response));
self
}
pub(crate) fn kind(&self) -> ErrorKind {
self.kind
}
/// Return true iff the error was due to a connection closing.
pub(crate) fn connection_closed(&self) -> bool {
if self.kind() != ErrorKind::Io {
return false;
}
let source = match self.source.as_ref() {
Some(e) => e,
None => return false,
};
let ioe: &Box<io::Error> = match source.downcast_ref() {
Some(e) => e,
None => return false,
};
match ioe.kind() {
io::ErrorKind::ConnectionAborted => true,
io::ErrorKind::ConnectionReset => true,
_ => false,
}
}
}
/// One of the types of error that can occur when processing a Request.
#[derive(Debug, PartialEq, Clone, Copy)]
pub enum ErrorKind {
/// The url could not be understood. /// The url could not be understood.
BadUrl(String), BadUrl,
/// The url scheme could not be understood. /// The url scheme could not be understood.
UnknownScheme(String), UnknownScheme,
/// DNS lookup failed. /// DNS lookup failed.
DnsFailed(String), DnsFailed,
/// Connection to server failed. /// Connection to server failed.
ConnectionFailed(String), ConnectionFailed,
/// Too many redirects. /// Too many redirects.
TooManyRedirects, TooManyRedirects,
/// A status line we don't understand `HTTP/1.1 200 OK`. /// A status line we don't understand `HTTP/1.1 200 OK`.
@@ -19,7 +110,7 @@ pub enum Error {
/// A header line that couldn't be parsed. /// A header line that couldn't be parsed.
BadHeader, BadHeader,
/// Some unspecified `std::io::Error`. /// Some unspecified `std::io::Error`.
Io(io::Error), Io,
/// Proxy information was not properly formatted /// Proxy information was not properly formatted
BadProxy, BadProxy,
/// Proxy credentials were not properly formatted /// Proxy credentials were not properly formatted
@@ -31,44 +122,60 @@ pub enum Error {
/// HTTP status code indicating an error (e.g. 4xx, 5xx) /// HTTP status code indicating an error (e.g. 4xx, 5xx)
/// Read the inner response body for details and to return /// Read the inner response body for details and to return
/// the connection to the pool. /// the connection to the pool.
HTTP(Box<Response>), HTTP,
} }
impl Error { impl ErrorKind {
// Return true iff the error was due to a connection closing. pub(crate) fn new(self) -> Error {
pub(crate) fn connection_closed(&self) -> bool { Error::new(self, None)
match self {
Error::Io(e) if e.kind() == ErrorKind::ConnectionAborted => true,
Error::Io(e) if e.kind() == ErrorKind::ConnectionReset => true,
_ => false,
} }
pub(crate) fn msg(self, s: &str) -> Error {
Error::new(self, Some(s.to_string()))
} }
} }
impl From<io::Error> for Error { impl From<io::Error> for Error {
fn from(err: io::Error) -> Error { fn from(err: io::Error) -> Error {
Error::Io(err) ErrorKind::Io.new().src(err)
} }
} }
impl fmt::Display for Error { impl fmt::Display for ErrorKind {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
match self { match self {
Error::BadUrl(url) => write!(f, "Bad URL: {}", url), ErrorKind::BadUrl => write!(f, "Bad URL"),
Error::UnknownScheme(scheme) => write!(f, "Unknown Scheme: {}", scheme), ErrorKind::UnknownScheme => write!(f, "Unknown Scheme"),
Error::DnsFailed(err) => write!(f, "Dns Failed: {}", err), ErrorKind::DnsFailed => write!(f, "Dns Failed"),
Error::ConnectionFailed(err) => write!(f, "Connection Failed: {}", err), ErrorKind::ConnectionFailed => write!(f, "Connection Failed"),
Error::TooManyRedirects => write!(f, "Too Many Redirects"), ErrorKind::TooManyRedirects => write!(f, "Too Many Redirects"),
Error::BadStatus => write!(f, "Bad Status"), ErrorKind::BadStatus => write!(f, "Bad Status"),
Error::BadHeader => write!(f, "Bad Header"), ErrorKind::BadHeader => write!(f, "Bad Header"),
Error::Io(ioe) => write!(f, "Network Error: {}", ioe), ErrorKind::Io => write!(f, "Network Error"),
Error::BadProxy => write!(f, "Malformed proxy"), ErrorKind::BadProxy => write!(f, "Malformed proxy"),
Error::BadProxyCreds => write!(f, "Failed to parse proxy credentials"), ErrorKind::BadProxyCreds => write!(f, "Failed to parse proxy credentials"),
Error::ProxyConnect => write!(f, "Proxy failed to connect"), ErrorKind::ProxyConnect => write!(f, "Proxy failed to connect"),
Error::InvalidProxyCreds => write!(f, "Provided proxy credentials are incorrect"), ErrorKind::InvalidProxyCreds => write!(f, "Provided proxy credentials are incorrect"),
Error::HTTP(response) => write!(f, "HTTP status {}", response.status()), ErrorKind::HTTP => write!(f, "HTTP status error"),
} }
} }
} }
impl std::error::Error for Error {} #[test]
fn status_code_error() {
let mut err = Error::new(ErrorKind::HTTP, None);
err = err.response(Response::new(500, "Internal Server Error", "too much going on").unwrap());
assert_eq!(err.to_string(), "status code 500");
err = err.url("http://example.com/".parse().unwrap());
assert_eq!(err.to_string(), "http://example.com/: status code 500");
}
#[test]
fn io_error() {
let ioe = io::Error::new(io::ErrorKind::TimedOut, "too slow");
let mut err = Error::new(ErrorKind::Io, Some("oops".to_string())).src(ioe);
err = err.url("http://example.com/".parse().unwrap());
assert_eq!(err.to_string(), "http://example.com/: Io: oops: too slow");
}
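With this design, call sites branch on the error's kind instead of destructuring enum variants that carry data. A sketch of that usage, in the spirit of the tests above (it assumes the `kind()` accessor is reachable from the caller, as it is from the crate's own tests; the URL is a placeholder):

```rust
use ureq::ErrorKind;

fn classify() {
    match ureq::get("http://example.com/").call() {
        Ok(resp) => println!("ok: {}", resp.status()),
        // Non-2xx statuses surface as ErrorKind::HTTP.
        Err(e) if e.kind() == ErrorKind::HTTP => println!("server said no: {}", e),
        // Everything else (DNS, connect, I/O, ...) keeps its kind and its source chain.
        Err(e) => println!("transport problem ({:?}): {}", e.kind(), e),
    }
}
```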

src/header.rs

@@ -1,4 +1,4 @@
use crate::error::Error; use crate::error::{Error, ErrorKind};
use std::fmt; use std::fmt;
use std::str::FromStr; use std::str::FromStr;
@@ -66,10 +66,11 @@ impl Header {
pub(crate) fn validate(&self) -> Result<(), Error> { pub(crate) fn validate(&self) -> Result<(), Error> {
if !valid_name(self.name()) || !valid_value(&self.line.as_str()[self.index + 1..]) { if !valid_name(self.name()) || !valid_value(&self.line.as_str()[self.index + 1..]) {
return Err(Error::BadHeader); Err(ErrorKind::BadHeader.msg(&format!("invalid header '{}'", self.line)))
} } else {
Ok(()) Ok(())
} }
}
} }
pub fn get_header<'a, 'b>(headers: &'b [Header], name: &'a str) -> Option<&'b str> { pub fn get_header<'a, 'b>(headers: &'b [Header], name: &'a str) -> Option<&'b str> {
@@ -150,11 +151,13 @@ impl FromStr for Header {
fn from_str(s: &str) -> Result<Self, Self::Err> { fn from_str(s: &str) -> Result<Self, Self::Err> {
// //
let line = s.to_string(); let line = s.to_string();
let index = s.find(':').ok_or_else(|| Error::BadHeader)?; let index = s
.find(':')
.ok_or_else(|| ErrorKind::BadHeader.msg("no colon in header"))?;
// no value? // no value?
if index >= s.len() { if index >= s.len() {
return Err(Error::BadHeader); return Err(ErrorKind::BadHeader.msg("no value in header"));
} }
let header = Header { line, index }; let header = Header { line, index };
@@ -203,7 +206,7 @@ fn test_parse_invalid_name() {
for c in cases { for c in cases {
let result = c.parse::<Header>(); let result = c.parse::<Header>();
assert!( assert!(
matches!(result, Err(Error::BadHeader)), matches!(result, Err(ref e) if e.kind() == ErrorKind::BadHeader),
"'{}'.parse(): expected BadHeader, got {:?}", "'{}'.parse(): expected BadHeader, got {:?}",
c, c,
result result
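A small companion sketch for the header changes: parsing and validating in the same style as the test above. It would live inside the crate, since `validate()` and `kind()` are crate-private, and the accessor names are assumed from the surrounding code:

```rust
// Sketch of an additional test in header.rs's test module.
#[test]
fn parse_and_validate_header() {
    let h: Header = "X-My-Header: Secret".parse().unwrap();
    assert_eq!(h.name(), "X-My-Header");
    assert!(h.validate().is_ok());

    // A line with no colon is rejected with ErrorKind::BadHeader ("no colon in header").
    let bad = "NotAHeader".parse::<Header>();
    assert!(matches!(bad, Err(ref e) if e.kind() == ErrorKind::BadHeader));
}
```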

src/lib.rs

@@ -1,84 +1,136 @@
#![forbid(unsafe_code)] #![forbid(unsafe_code)]
#![warn(clippy::all)] #![warn(clippy::all)]
//! ureq is a minimal request library. //! A simple, safe HTTP client.
//! //!
//! The goals of this library are: //! Ureq's first priority is being easy for you to use. It's great for
//! anyone who wants a low-overhead HTTP client that just gets the job done. Works
//! very well with HTTP APIs. Its features include cookies, JSON, HTTP proxies,
//! HTTPS, and charset decoding.
//! //!
//! * Minimal dependency tree //! Ureq is in pure Rust for safety and ease of understanding. It avoids using
//! * Obvious API //! `unsafe` directly. It [uses blocking I/O][blocking] instead of async I/O, because that keeps
//! * Blocking API //! the API simple and keeps dependencies to a minimum. For TLS, ureq uses
//! * No use of unsafe //! [rustls].
//! //!
//! ``` //! [blocking]: #blocking-i-o-for-simplicity
//! // requires feature: `ureq = { version = "*", features = ["json"] }`
//! # #[cfg(feature = "json")] {
//! use ureq::json;
//! //!
//! fn main() -> std::io::Result<()> { //! ## Usage
//! // sync post request of some json.
//! let resp = ureq::post("https://myapi.example.com/ingest")
//! .set("X-My-Header", "Secret")
//! .send_json(json!({
//! "name": "martin",
//! "rust": true
//! }));
//! //!
//! if let Ok(resp) = resp { //! In its simplest form, ureq looks like this:
//! println!("success: {}", resp.into_string()?); //!
//! } else { //! ```rust
//! // This can include errors like failure to parse URL or connect timeout. //! # fn main() -> Result<(), ureq::Error> {
//! println!("error {}", resp.err().unwrap()); //! # ureq::is_test(true);
//! } //! let body: String = ureq::get("http://example.com")
//! Ok(()) //! .set("Accept", "text/html")
//! } //! .call()?
//! .into_string()?;
//! # Ok(())
//! # } //! # }
//! ``` //! ```
//! //!
//! For more involved tasks, you'll want to create an [Agent]. An Agent
//! holds a connection pool for reuse, and a cookie store if you use the
//! "cookies" feature. An Agent can be cheaply cloned due to an internal
//! [Arc](std::sync::Arc) and all clones of an Agent share state among each other. Creating
//! an Agent also allows setting options like the TLS configuration.
//!
//! ```no_run
//! # fn main() -> std::result::Result<(), ureq::Error> {
//! # ureq::is_test(true);
//! use ureq::{Agent, AgentBuilder};
//! use std::time::Duration;
//!
//! let agent: Agent = ureq::AgentBuilder::new()
//! .timeout_read(Duration::from_secs(5))
//! .timeout_write(Duration::from_secs(5))
//! .build();
//! let body: String = agent.get("http://example.com/page")
//! .call()?
//! .into_string()?;
//!
//! // Reuses the connection from previous request.
//! let response: String = agent.put("http://example.com/upload")
//! .set("Authorization", "example-token")
//! .call()?
//! .into_string()?;
//! # Ok(())
//! # }
//! ```
//!
//! Ureq supports sending and receiving json, if you enable the "json" feature:
//!
//! ```rust
//! # #[cfg(feature = "json")]
//! # fn main() -> std::result::Result<(), ureq::Error> {
//! # ureq::is_test(true);
//! // Requires the `json` feature enabled.
//! let resp: String = ureq::post("http://myapi.example.com/ingest")
//! .set("X-My-Header", "Secret")
//! .send_json(ureq::json!({
//! "name": "martin",
//! "rust": true
//! }))?
//! .into_string()?;
//! # Ok(())
//! # }
//! # #[cfg(not(feature = "json"))]
//! # fn main() {}
//! ```
//!
//! ## Features
//!
//! To enable a minimal dependency tree, some features are off by default.
//! You can control them when including ureq as a dependency.
//!
//! `ureq = { version = "*", features = ["json", "charset"] }`
//!
//! * `tls` enables https. This is enabled by default.
//! * `cookies` enables cookies.
//! * `json` enables [Response::into_json()] and [Request::send_json()] via serde_json.
//! * `charset` enables interpreting the charset part of the Content-Type header
//! (e.g. `Content-Type: text/plain; charset=iso-8859-1`). Without this, the
//! library defaults to Rust's built in `utf-8`.
//!
//! # Plain requests //! # Plain requests
//! //!
//! Most standard methods (GET, POST, PUT etc), are supported as functions from the //! Most standard methods (GET, POST, PUT etc), are supported as functions from the
//! top of the library ([`ureq::get`](fn.get.html), [`ureq::post`](fn.post.html), //! top of the library ([get()], [post()], [put()], etc).
//! [`ureq::put`](fn.put.html), etc).
//! //!
//! These top level http method functions create a [Request](struct.Request.html) instance //! These top level http method functions create a [Request] instance
//! which follows a build pattern. The builders are finished using: //! which follows a build pattern. The builders are finished using:
//! //!
//! * [`.call()`](struct.Request.html#method.call) without a request body. //! * [`.call()`][Request::call()] without a request body.
//! * [`.send()`](struct.Request.html#method.send) with a request body as `Read` (chunked encoding support for non-known sized readers). //! * [`.send()`][Request::send()] with a request body as [Read][std::io::Read] (chunked encoding support for non-known sized readers).
//! * [`.send_string()`](struct.Request.html#method.send_string) body as string. //! * [`.send_string()`][Request::send_string()] body as string.
//! * [`.send_bytes()`](struct.Request.html#method.send_bytes) body as bytes. //! * [`.send_bytes()`][Request::send_bytes()] body as bytes.
//! * [`.send_form()`](struct.Request.html#method.send_form) key-value pairs as application/x-www-form-urlencoded. //! * [`.send_form()`][Request::send_form()] key-value pairs as application/x-www-form-urlencoded.
//! //!
//! # JSON //! # JSON
//! //!
//! By enabling the `ureq = { version = "*", features = ["json"] }` feature, //! By enabling the `ureq = { version = "*", features = ["json"] }` feature,
//! the library supports serde json. //! the library supports serde json.
//! //!
//! * [`request.send_json()`](struct.Request.html#method.send_json) send body as serde json. //! * [`request.send_json()`][Request::send_json()] send body as serde json.
//! * [`response.into_json()`](struct.Response.html#method.into_json) transform response to json. //! * [`response.into_json()`][Response::into_json()] transform response to json.
//! //!
//! # Agents //! # Content-Length and Transfer-Encoding
//! //!
//! To maintain a state, cookies, between requests, you use an [agent](struct.Agent.html). //! The library will send a Content-Length header on requests with bodies of
//! Agents also follow the build pattern. Agents are created with //! known size, in other words, those sent with
//! [`ureq::agent()`](struct.Agent.html). //! [`.send_string()`][Request::send_string()],
//! [`.send_bytes()`][Request::send_bytes()],
//! [`.send_form()`][Request::send_form()], or
//! [`.send_json()`][Request::send_json()]. If you send a
//! request body with [`.send()`][Request::send()],
//! which takes a [Read][std::io::Read] of unknown size, ureq will send Transfer-Encoding:
//! chunked, and encode the body accordingly. Bodyless requests
//! (GETs and HEADs) are sent with [`.call()`][Request::call()]
//! and ureq adds neither a Content-Length nor a Transfer-Encoding header.
//! //!
//! # Content-Length //! If you set your own Content-Length or Transfer-Encoding header before
//! //! sending the body, ureq will respect that header by not overriding it,
//! The library will set the content length on the request when using //! and by encoding the body or not, as indicated by the headers you set.
//! [`.send_string()`](struct.Request.html#method.send_string) or
//! [`.send_json()`](struct.Request.html#method.send_json). In other cases the user
//! can optionally `request.set("Content-Length", 1234)`.
//!
//! For responses, if the `Content-Length` header is present, the methods that reads the
//! body (as string, json or read trait) are all limited to the length specified in the header.
//!
//! # Transfer-Encoding: chunked
//!
//! Dechunking is a response body is done automatically if the response headers contains
//! a `Transfer-Encoding` header.
//!
//! Sending a chunked request body is done by setting the header prior to sending a body.
//! //!
//! ``` //! ```
//! let resp = ureq::post("http://my-server.com/ingest") //! let resp = ureq::post("http://my-server.com/ingest")
@@ -91,15 +143,52 @@
//! By enabling the `ureq = { version = "*", features = ["charset"] }` feature, //! By enabling the `ureq = { version = "*", features = ["charset"] }` feature,
//! the library supports sending/receiving other character sets than `utf-8`. //! the library supports sending/receiving other character sets than `utf-8`.
//! //!
//! For [`response.into_string()`](struct.Response.html#method.into_string) we read the //! For [`response.into_string()`][Response::into_string()] we read the
//! header `Content-Type: text/plain; charset=iso-8859-1` and if it contains a charset //! header `Content-Type: text/plain; charset=iso-8859-1` and if it contains a charset
//! specification, we try to decode the body using that encoding. In the absence of, or failing //! specification, we try to decode the body using that encoding. In the absence of, or failing
//! to interpret the charset, we fall back on `utf-8`. //! to interpret the charset, we fall back on `utf-8`.
//! //!
//! Similarly when using [`request.send_string()`](struct.Request.html#method.send_string), //! Similarly when using [`request.send_string()`][Request::send_string()],
//! we first check if the user has set a `; charset=<whatwg charset>` and attempt //! we first check if the user has set a `; charset=<whatwg charset>` and attempt
//! to encode the request body using that. //! to encode the request body using that.
//! //!
//! # Blocking I/O for simplicity
//!
//! Rust supports [asynchronous (async) I/O][async], but ureq does not use it. Async I/O
//! allows serving many concurrent requests without high costs in memory and OS threads. But
//! it comes at a cost in complexity. Async programs need to pull in a runtime (usually
//! [async-std] or [tokio]). They also need async variants of any method that might block, and of
//! [any method that might call another method that might block][what-color]. That means async
//! programs usually have a lot of dependencies - which adds to compile times, and increases
//! risk.
//!
//! The costs of async are worth paying, if you're writing an HTTP server that must serve
//! many many clients with minimal overhead. However, for HTTP _clients_, we believe that the
//! cost is usually not worth paying. The low-cost alternative to async I/O is blocking I/O,
//! which has a different price: it requires an OS thread per concurrent request. However,
//! that price is usually not high: most HTTP clients make requests sequentially, or with
//! low concurrency.
//!
//! That's why ureq uses blocking I/O and plans to stay that way. Other HTTP clients offer both
//! an async API and a blocking API, but we want to offer a blocking API without pulling in all
//! the dependencies required by an async API.
//!
//! [async]: https://rust-lang.github.io/async-book/01_getting_started/02_why_async.html
//! [async-std]: https://github.com/async-rs/async-std#async-std
//! [tokio]: https://github.com/tokio-rs/tokio#tokio
//! [what-color]: https://journal.stuffwithstuff.com/2015/02/01/what-color-is-your-function/
//!
//! ------------------------------------------------------------------------------
//!
//! Ureq is inspired by other great HTTP clients like
//! [superagent](http://visionmedia.github.io/superagent/) and
//! [the fetch API](https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API).
//!
//! If ureq is not what you're looking for, check out these other Rust HTTP clients:
//! [surf](https://crates.io/crates/surf), [reqwest](https://crates.io/crates/reqwest),
//! [isahc](https://crates.io/crates/isahc), [attohttpc](https://crates.io/crates/attohttpc),
//! [actix-web](https://crates.io/crates/actix-web), and [hyper](https://crates.io/crates/hyper).
//!
mod agent; mod agent;
mod body; mod body;
@@ -126,7 +215,7 @@ mod testserver;
pub use crate::agent::Agent; pub use crate::agent::Agent;
pub use crate::agent::AgentBuilder; pub use crate::agent::AgentBuilder;
pub use crate::error::Error; pub use crate::error::{Error, ErrorKind};
pub use crate::header::Header; pub use crate::header::Header;
pub use crate::proxy::Proxy; pub use crate::proxy::Proxy;
pub use crate::request::Request; pub use crate::request::Request;
@@ -247,6 +336,7 @@ mod tests {
#[cfg(feature = "tls")] #[cfg(feature = "tls")]
fn connect_https_invalid_name() { fn connect_https_invalid_name() {
let result = get("https://example.com{REQUEST_URI}/").call(); let result = get("https://example.com{REQUEST_URI}/").call();
assert!(matches!(result.unwrap_err(), Error::DnsFailed(_))); let e = ErrorKind::DnsFailed;
assert_eq!(result.unwrap_err().kind(), e);
} }
} }

src/proxy.rs

@@ -1,4 +1,4 @@
use crate::error::Error; use crate::error::{Error, ErrorKind};
/// Proxy protocol /// Proxy protocol
#[derive(Clone, Copy, Debug, Eq, Hash, PartialEq)] #[derive(Clone, Copy, Debug, Eq, Hash, PartialEq)]
@@ -30,7 +30,7 @@ impl Proxy {
.into_iter(); .into_iter();
if parts.len() != 2 { if parts.len() != 2 {
Err(Error::BadProxyCreds) Err(ErrorKind::BadProxyCreds.new())
} else { } else {
Ok(( Ok((
parts.next().map(String::from), parts.next().map(String::from),
@@ -46,14 +46,14 @@ impl Proxy {
match host { match host {
Some(host) => { Some(host) => {
let mut parts = host.as_ref().split(':').collect::<Vec<&str>>().into_iter(); let mut parts = host.as_ref().split(':').collect::<Vec<&str>>().into_iter();
let host = parts.next().ok_or(Error::BadProxy)?; let host = parts.next().ok_or(ErrorKind::BadProxy.new())?;
let port = parts.next(); let port = parts.next();
Ok(( Ok((
String::from(host), String::from(host),
port.and_then(|port| port.parse::<u32>().ok()), port.and_then(|port| port.parse::<u32>().ok()),
)) ))
} }
None => Err(Error::BadProxy), None => Err(ErrorKind::BadProxy.new()),
} }
} }
@@ -84,7 +84,7 @@ impl Proxy {
Some("http") => Proto::HTTPConnect, Some("http") => Proto::HTTPConnect,
Some("socks") => Proto::SOCKS5, Some("socks") => Proto::SOCKS5,
Some("socks5") => Proto::SOCKS5, Some("socks5") => Proto::SOCKS5,
_ => return Err(Error::BadProxy), _ => return Err(ErrorKind::BadProxy.new()),
} }
} else { } else {
Proto::HTTPConnect Proto::HTTPConnect
@@ -92,7 +92,7 @@ impl Proxy {
let remaining_parts = proxy_parts.next(); let remaining_parts = proxy_parts.next();
if remaining_parts == None { if remaining_parts == None {
return Err(Error::BadProxy); return Err(ErrorKind::BadProxy.new());
} }
let mut creds_server_port_parts = remaining_parts let mut creds_server_port_parts = remaining_parts
@@ -152,13 +152,19 @@ Proxy-Connection: Keep-Alive\r\n\
pub(crate) fn verify_response(response: &[u8]) -> Result<(), Error> { pub(crate) fn verify_response(response: &[u8]) -> Result<(), Error> {
let response_string = String::from_utf8_lossy(response); let response_string = String::from_utf8_lossy(response);
let top_line = response_string.lines().next().ok_or(Error::ProxyConnect)?; let top_line = response_string
let status_code = top_line.split_whitespace().nth(1).ok_or(Error::BadProxy)?; .lines()
.next()
.ok_or(ErrorKind::ProxyConnect.new())?;
let status_code = top_line
.split_whitespace()
.nth(1)
.ok_or(ErrorKind::BadProxy.new())?;
match status_code { match status_code {
"200" => Ok(()), "200" => Ok(()),
"401" | "407" => Err(Error::InvalidProxyCreds), "401" | "407" => Err(ErrorKind::InvalidProxyCreds.new()),
_ => Err(Error::BadProxy), _ => Err(ErrorKind::BadProxy.new()),
} }
} }
} }
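For context, a sketch of how a proxy like the ones parsed above might be plugged in from user code. `Proxy::new()` and an AgentBuilder `proxy()` option are assumptions here, not part of this diff, and SOCKS support sits behind the `socks-proxy` feature:

```rust
fn via_proxy() -> Result<(), ureq::Error> {
    // Format: "user:password@host:port"; the scheme prefix selects
    // HTTP CONNECT ("http://") or SOCKS5 ("socks://" / "socks5://").
    let proxy = ureq::Proxy::new("socks5://localhost:9999")?;
    let agent = ureq::AgentBuilder::new().proxy(proxy).build();
    let body = agent.get("http://example.com/").call()?.into_string()?;
    println!("{}", body);
    Ok(())
}
```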

src/request.rs

@@ -3,12 +3,12 @@ use std::io::Read;
use url::{form_urlencoded, Url}; use url::{form_urlencoded, Url};
use crate::agent::Agent;
use crate::body::Payload; use crate::body::Payload;
use crate::error::Error; use crate::error::ErrorKind;
use crate::header::{self, Header}; use crate::header::{self, Header};
use crate::unit::{self, Unit}; use crate::unit::{self, Unit};
use crate::Response; use crate::Response;
use crate::{agent::Agent, error::Error};
#[cfg(feature = "json")] #[cfg(feature = "json")]
use super::SerdeValue; use super::SerdeValue;
@@ -58,7 +58,10 @@ impl Request {
} }
} }
/// Executes the request and blocks the caller until done. /// Sends the request with no body and blocks the caller until done.
///
/// Use this with GET, HEAD, or TRACE. It sends neither Content-Length
/// nor Transfer-Encoding.
/// ///
/// ``` /// ```
/// # fn main() -> Result<(), ureq::Error> { /// # fn main() -> Result<(), ureq::Error> {
@@ -76,19 +79,20 @@ impl Request {
for h in &self.headers { for h in &self.headers {
h.validate()?; h.validate()?;
} }
let mut url: Url = self let mut url: Url = self.url.parse().map_err(|e: url::ParseError| {
.url ErrorKind::BadUrl
.parse() .msg(&format!("failed to parse URL '{}'", self.url))
.map_err(|e: url::ParseError| Error::BadUrl(e.to_string()))?; .src(e)
})?;
for (name, value) in self.query_params.clone() { for (name, value) in self.query_params.clone() {
url.query_pairs_mut().append_pair(&name, &value); url.query_pairs_mut().append_pair(&name, &value);
} }
let reader = payload.into_read(); let reader = payload.into_read();
let unit = Unit::new(&self.agent, &self.method, &url, &self.headers, &reader); let unit = Unit::new(&self.agent, &self.method, &url, &self.headers, &reader);
let response = unit::connect(unit, true, 0, reader, false)?; let response = unit::connect(unit, true, 0, reader, false).map_err(|e| e.url(url))?;
if response.error() && self.error_on_non_2xx { if response.error() && self.error_on_non_2xx {
Err(Error::HTTP(response.into())) Err(ErrorKind::HTTP.new().response(response))
} else { } else {
Ok(response) Ok(response)
} }

src/response.rs

@@ -1,10 +1,10 @@
use std::fmt; use std::fmt;
use std::io::{self, Cursor, ErrorKind, Read}; use std::io::{self, Cursor, Read};
use std::str::FromStr; use std::str::FromStr;
use chunked_transfer::Decoder as ChunkDecoder; use chunked_transfer::Decoder as ChunkDecoder;
use crate::error::Error; use crate::error::{Error, ErrorKind};
use crate::header::Header; use crate::header::Header;
use crate::pool::PoolReturnRead; use crate::pool::PoolReturnRead;
use crate::stream::{DeadlineStream, Stream}; use crate::stream::{DeadlineStream, Stream};
@@ -400,13 +400,13 @@ impl Response {
// We make a clone of the original error since serde_json::Error doesn't // We make a clone of the original error since serde_json::Error doesn't
// let us get the wrapped error instance back. // let us get the wrapped error instance back.
if let Some(ioe) = e.source().and_then(|s| s.downcast_ref::<io::Error>()) { if let Some(ioe) = e.source().and_then(|s| s.downcast_ref::<io::Error>()) {
if ioe.kind() == ErrorKind::TimedOut { if ioe.kind() == io::ErrorKind::TimedOut {
return io_err_timeout(ioe.to_string()); return io_err_timeout(ioe.to_string());
} }
} }
io::Error::new( io::Error::new(
ErrorKind::InvalidData, io::ErrorKind::InvalidData,
format!("Failed to read JSON: {}", e), format!("Failed to read JSON: {}", e),
) )
}) })
@@ -466,19 +466,21 @@ fn parse_status_line(line: &str) -> Result<(ResponseStatusIndex, u16), Error> {
let mut split = line.splitn(3, ' '); let mut split = line.splitn(3, ' ');
let http_version = split.next().ok_or_else(|| Error::BadStatus)?; let http_version = split.next().ok_or_else(|| ErrorKind::BadStatus.new())?;
if http_version.len() < 5 { if http_version.len() < 5 {
return Err(Error::BadStatus); return Err(ErrorKind::BadStatus.new());
} }
let index1 = http_version.len(); let index1 = http_version.len();
let status = split.next().ok_or_else(|| Error::BadStatus)?; let status = split.next().ok_or_else(|| ErrorKind::BadStatus.new())?;
if status.len() < 2 { if status.len() < 2 {
return Err(Error::BadStatus); return Err(ErrorKind::BadStatus.new());
} }
let index2 = index1 + status.len(); let index2 = index1 + status.len();
let status = status.parse::<u16>().map_err(|_| Error::BadStatus)?; let status = status
.parse::<u16>()
.map_err(|_| ErrorKind::BadStatus.new())?;
Ok(( Ok((
ResponseStatusIndex { ResponseStatusIndex {
@@ -533,7 +535,7 @@ fn read_next_line<R: Read>(reader: &mut R) -> io::Result<String> {
if amt == 0 { if amt == 0 {
return Err(io::Error::new( return Err(io::Error::new(
ErrorKind::ConnectionAborted, io::ErrorKind::ConnectionAborted,
"Unexpected EOF", "Unexpected EOF",
)); ));
} }
@@ -542,8 +544,9 @@ fn read_next_line<R: Read>(reader: &mut R) -> io::Result<String> {
if byte == b'\n' && prev_byte_was_cr { if byte == b'\n' && prev_byte_was_cr {
buf.pop(); // removing the '\r' buf.pop(); // removing the '\r'
return String::from_utf8(buf) return String::from_utf8(buf).map_err(|_| {
.map_err(|_| io::Error::new(ErrorKind::InvalidInput, "Header is not in ASCII")); io::Error::new(io::ErrorKind::InvalidInput, "Header is not in ASCII")
});
} }
prev_byte_was_cr = byte == b'\r'; prev_byte_was_cr = byte == b'\r';
@@ -587,7 +590,7 @@ impl<R: Read> Read for LimitedRead<R> {
// received, the recipient MUST consider the message to be // received, the recipient MUST consider the message to be
// incomplete and close the connection. // incomplete and close the connection.
Ok(0) => Err(io::Error::new( Ok(0) => Err(io::Error::new(
ErrorKind::InvalidData, io::ErrorKind::InvalidData,
"response body closed before all bytes were read", "response body closed before all bytes were read",
)), )),
Ok(amount) => { Ok(amount) => {
@@ -736,7 +739,7 @@ mod tests {
fn parse_borked_header() { fn parse_borked_header() {
let s = "HTTP/1.1 BORKED\r\n".to_string(); let s = "HTTP/1.1 BORKED\r\n".to_string();
let err = s.parse::<Response>().unwrap_err(); let err = s.parse::<Response>().unwrap_err();
assert!(matches!(err, Error::BadStatus)); assert_eq!(err.kind(), ErrorKind::BadStatus);
} }
} }
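A companion sketch to `parse_borked_header` above, showing the happy path of the same `FromStr` implementation (it assumes the usual `status()`/`status_text()` accessors on Response):

```rust
#[test]
fn parse_minimal_response() {
    // Counterpart to parse_borked_header: a well-formed status line parses.
    let resp: Response = "HTTP/1.1 200 OK\r\n\r\nhello".parse().unwrap();
    assert_eq!(resp.status(), 200);
    assert_eq!(resp.status_text(), "OK");
}
```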

src/stream.rs

@@ -1,6 +1,6 @@
use log::debug; use log::debug;
use std::fmt; use std::fmt;
use std::io::{self, BufRead, BufReader, Cursor, ErrorKind, Read, Write}; use std::io::{self, BufRead, BufReader, Cursor, Read, Write};
use std::net::SocketAddr; use std::net::SocketAddr;
use std::net::TcpStream; use std::net::TcpStream;
use std::time::Duration; use std::time::Duration;
@@ -15,10 +15,10 @@ use rustls::StreamOwned;
#[cfg(feature = "socks-proxy")] #[cfg(feature = "socks-proxy")]
use socks::{TargetAddr, ToTargetAddr}; use socks::{TargetAddr, ToTargetAddr};
use crate::proxy::Proto;
use crate::proxy::Proxy; use crate::proxy::Proxy;
use crate::{error::Error, proxy::Proto};
use crate::error::Error; use crate::error::ErrorKind;
use crate::unit::Unit; use crate::unit::Unit;
#[allow(clippy::large_enum_variant)] #[allow(clippy::large_enum_variant)]
@@ -67,7 +67,7 @@ impl Read for DeadlineStream {
// causes ErrorKind::WouldBlock instead of ErrorKind::TimedOut. // causes ErrorKind::WouldBlock instead of ErrorKind::TimedOut.
// Since the socket most definitely not set_nonblocking(true), // Since the socket most definitely not set_nonblocking(true),
// we can safely normalize WouldBlock to TimedOut // we can safely normalize WouldBlock to TimedOut
if e.kind() == ErrorKind::WouldBlock { if e.kind() == io::ErrorKind::WouldBlock {
return io_err_timeout("timed out reading response".to_string()); return io_err_timeout("timed out reading response".to_string());
} }
e e
@@ -86,7 +86,7 @@ fn time_until_deadline(deadline: Instant) -> io::Result<Duration> {
} }
pub(crate) fn io_err_timeout(error: String) -> io::Error { pub(crate) fn io_err_timeout(error: String) -> io::Error {
io::Error::new(ErrorKind::TimedOut, error) io::Error::new(io::ErrorKind::TimedOut, error)
} }
impl fmt::Debug for Stream { impl fmt::Debug for Stream {
@@ -119,7 +119,7 @@ impl Stream {
let result = match stream.peek(&mut buf) { let result = match stream.peek(&mut buf) {
Ok(0) => Ok(true), Ok(0) => Ok(true),
Ok(_) => Ok(false), // TODO: Maybe this should produce an "unexpected response" error Ok(_) => Ok(false), // TODO: Maybe this should produce an "unexpected response" error
Err(e) if e.kind() == ErrorKind::WouldBlock => Ok(false), Err(e) if e.kind() == io::ErrorKind::WouldBlock => Ok(false),
Err(e) => Err(e), Err(e) => Err(e),
}; };
stream.set_nonblocking(false)?; stream.set_nonblocking(false)?;
@@ -241,7 +241,7 @@ fn read_https(
#[allow(deprecated)] #[allow(deprecated)]
#[cfg(feature = "tls")] #[cfg(feature = "tls")]
fn is_close_notify(e: &std::io::Error) -> bool { fn is_close_notify(e: &std::io::Error) -> bool {
if e.kind() != ErrorKind::ConnectionAborted { if e.kind() != io::ErrorKind::ConnectionAborted {
return false; return false;
} }
@@ -313,7 +313,7 @@ pub(crate) fn connect_https(unit: &Unit, hostname: &str) -> Result<Stream, Error
let port = unit.url.port().unwrap_or(443); let port = unit.url.port().unwrap_or(443);
let sni = webpki::DNSNameRef::try_from_ascii_str(hostname) let sni = webpki::DNSNameRef::try_from_ascii_str(hostname)
.map_err(|err| Error::DnsFailed(err.to_string()))?; .map_err(|err| ErrorKind::DnsFailed.new().src(err))?;
let tls_conf: &Arc<rustls::ClientConfig> = unit let tls_conf: &Arc<rustls::ClientConfig> = unit
.agent .agent
.config .config
@@ -347,10 +347,10 @@ pub(crate) fn connect_host(unit: &Unit, hostname: &str, port: u16) -> Result<Tcp
let sock_addrs = unit let sock_addrs = unit
.resolver() .resolver()
.resolve(&netloc) .resolve(&netloc)
.map_err(|e| Error::DnsFailed(format!("{}", e)))?; .map_err(|e| ErrorKind::DnsFailed.new().src(e))?;
if sock_addrs.is_empty() { if sock_addrs.is_empty() {
return Err(Error::DnsFailed(format!("No ip address for {}", hostname))); return Err(ErrorKind::DnsFailed.msg(&format!("No ip address for {}", hostname)));
} }
let proto = if let Some(ref proxy) = proxy { let proto = if let Some(ref proxy) = proxy {
@@ -396,9 +396,10 @@ pub(crate) fn connect_host(unit: &Unit, hostname: &str, port: u16) -> Result<Tcp
let mut stream = if let Some(stream) = any_stream { let mut stream = if let Some(stream) = any_stream {
stream stream
} else if let Some(e) = any_err {
return Err(ErrorKind::ConnectionFailed.msg("Connect error").src(e));
} else { } else {
let err = Error::ConnectionFailed(format!("{}", any_err.expect("Connect error"))); panic!("shouldn't happen: failed to connect to all IPs, but no error");
return Err(err);
}; };
if let Some(deadline) = unit.deadline { if let Some(deadline) = unit.deadline {
@@ -445,11 +446,13 @@ fn socks5_local_nslookup(
let addrs: Vec<SocketAddr> = unit let addrs: Vec<SocketAddr> = unit
.resolver() .resolver()
.resolve(&format!("{}:{}", hostname, port)) .resolve(&format!("{}:{}", hostname, port))
.map_err(|e| std::io::Error::new(ErrorKind::NotFound, format!("DNS failure: {}.", e)))?; .map_err(|e| {
std::io::Error::new(io::ErrorKind::NotFound, format!("DNS failure: {}.", e))
})?;
if addrs.is_empty() { if addrs.is_empty() {
return Err(std::io::Error::new( return Err(std::io::Error::new(
ErrorKind::NotFound, io::ErrorKind::NotFound,
"DNS failure: no socket addrs found.", "DNS failure: no socket addrs found.",
)); ));
} }
@@ -458,7 +461,7 @@ fn socks5_local_nslookup(
Ok(addr) => Ok(addr), Ok(addr) => Ok(addr),
Err(err) => { Err(err) => {
return Err(std::io::Error::new( return Err(std::io::Error::new(
ErrorKind::NotFound, io::ErrorKind::NotFound,
format!("DNS failure: {}.", err), format!("DNS failure: {}.", err),
)) ))
} }
@@ -579,7 +582,7 @@ fn connect_socks5(
_port: u16, _port: u16,
) -> Result<TcpStream, std::io::Error> { ) -> Result<TcpStream, std::io::Error> {
Err(std::io::Error::new( Err(std::io::Error::new(
ErrorKind::Other, io::ErrorKind::Other,
"SOCKS5 feature disabled.", "SOCKS5 feature disabled.",
)) ))
} }
@@ -592,10 +595,12 @@ pub(crate) fn connect_test(unit: &Unit) -> Result<Stream, Error> {
#[cfg(not(test))] #[cfg(not(test))]
pub(crate) fn connect_test(unit: &Unit) -> Result<Stream, Error> { pub(crate) fn connect_test(unit: &Unit) -> Result<Stream, Error> {
Err(Error::UnknownScheme(unit.url.scheme().to_string())) Err(ErrorKind::UnknownScheme.msg(&format!("unknown scheme '{}'", unit.url.scheme())))
} }
#[cfg(not(feature = "tls"))] #[cfg(not(feature = "tls"))]
pub(crate) fn connect_https(unit: &Unit, _hostname: &str) -> Result<Stream, Error> { pub(crate) fn connect_https(unit: &Unit, _hostname: &str) -> Result<Stream, Error> {
Err(Error::UnknownScheme(unit.url.scheme().to_string())) Err(ErrorKind::UnknownScheme
.msg("URL has 'https:' scheme but ureq was build without HTTP support")
.url(unit.url.clone()))
} }


@@ -1,5 +1,6 @@
#![allow(dead_code)] #![allow(dead_code)]
use crate::error::Error;
use crate::testserver::{read_request, TestServer}; use crate::testserver::{read_request, TestServer};
use std::io::{self, Read, Write}; use std::io::{self, Read, Write};
use std::net::TcpStream; use std::net::TcpStream;

src/test/mod.rs

@@ -1,10 +1,10 @@
use crate::error::Error;
use crate::stream::Stream;
use crate::unit::Unit; use crate::unit::Unit;
use crate::{error::Error};
use crate::{stream::Stream};
use once_cell::sync::Lazy; use once_cell::sync::Lazy;
use std::collections::HashMap;
use std::io::{Cursor, Write}; use std::io::{Cursor, Write};
use std::sync::{Arc, Mutex}; use std::sync::{Arc, Mutex};
use std::{collections::HashMap};
mod agent_test; mod agent_test;
mod body_read; mod body_read;


@@ -4,7 +4,7 @@ use std::{
}; };
use testserver::{self, TestServer}; use testserver::{self, TestServer};
use crate::test; use crate::{error::Error, test};
use super::super::*; use super::super::*;
@@ -34,7 +34,7 @@ fn redirect_many() {
.build() .build()
.get("test://host/redirect_many1") .get("test://host/redirect_many1")
.call(); .call();
assert!(matches!(result, Err(Error::TooManyRedirects))); assert!(matches!(result, Err(e) if e.kind() == ErrorKind::TooManyRedirects));
} }
#[test] #[test]
@@ -104,12 +104,11 @@ fn redirect_host() {
Ok(()) Ok(())
}); });
let url = format!("http://localhost:{}/", srv.port); let url = format!("http://localhost:{}/", srv.port);
let resp = crate::Agent::new().get(&url).call(); let result = crate::Agent::new().get(&url).call();
let err = resp.err();
assert!( assert!(
matches!(err, Some(Error::DnsFailed(_))), matches!(result, Err(ref e) if e.kind() == ErrorKind::DnsFailed),
"expected DnsFailed, got: {:?}", "expected Err(DnsFailed), got: {:?}",
err result
); );
} }
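The redirect tests drive an agent whose redirect limit is set through the builder. A sketch of the same idea from user code, assuming a `redirects()` option on AgentBuilder and that the error kind is visible to the caller (the URL is a placeholder):

```rust
fn limited_redirects() -> Result<(), ureq::Error> {
    // Follow at most 2 redirects; a longer chain errors with TooManyRedirects.
    let agent = ureq::AgentBuilder::new().redirects(2).build();
    match agent.get("http://example.com/hop1").call() {
        Ok(resp) => println!("landed on {}", resp.status()),
        Err(e) if e.kind() == ureq::ErrorKind::TooManyRedirects => {
            println!("gave up after 2 hops: {}", e);
        }
        Err(e) => return Err(e),
    }
    Ok(())
}
```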


@@ -157,13 +157,13 @@ fn non_ascii_header() {
test::set_handler("/non_ascii_header", |_unit| { test::set_handler("/non_ascii_header", |_unit| {
test::make_response(200, "OK", vec!["Wörse: Hädör"], vec![]) test::make_response(200, "OK", vec!["Wörse: Hädör"], vec![])
}); });
let resp = get("test://host/non_ascii_header") let result = get("test://host/non_ascii_header")
.set("Bäd", "Headör") .set("Bäd", "Headör")
.call(); .call();
assert!( assert!(
matches!(resp, Err(Error::BadHeader)), matches!(result, Err(ref e) if e.kind() == ErrorKind::BadHeader),
"expected Some(&BadHeader), got {:?}", "expected Err(BadHeader), got {:?}",
resp result
); );
} }


@@ -1,8 +1,11 @@
use crate::testserver::*; use crate::testserver::*;
use std::io::{self, Write};
use std::net::TcpStream; use std::net::TcpStream;
use std::thread; use std::thread;
use std::time::Duration; use std::time::Duration;
use std::{
error::Error,
io::{self, Write},
};
use super::super::*; use super::super::*;
@@ -96,10 +99,16 @@ fn read_timeout_during_headers() {
let server = TestServer::new(dribble_headers_respond); let server = TestServer::new(dribble_headers_respond);
let url = format!("http://localhost:{}/", server.port); let url = format!("http://localhost:{}/", server.port);
let agent = builder().timeout_read(Duration::from_millis(10)).build(); let agent = builder().timeout_read(Duration::from_millis(10)).build();
let resp = agent.get(&url).call(); let result = agent.get(&url).call();
match resp { match result {
Ok(_) => Err("successful response".to_string()), Ok(_) => Err("successful response".to_string()),
Err(Error::Io(e)) if e.kind() == io::ErrorKind::TimedOut => Ok(()), Err(e) if e.kind() == ErrorKind::Io => {
let ioe: Option<&io::Error> = e.source().and_then(|s| s.downcast_ref());
match ioe {
Some(e) if e.kind() == io::ErrorKind::TimedOut => Ok(()),
_ => Err(format!("wrong error type {:?}", e)),
}
}
Err(e) => Err(format!("Unexpected error type: {:?}", e)), Err(e) => Err(format!("Unexpected error type: {:?}", e)),
} }
.expect("expected timeout but got something else"); .expect("expected timeout but got something else");
@@ -111,10 +120,16 @@ fn overall_timeout_during_headers() {
let server = TestServer::new(dribble_headers_respond); let server = TestServer::new(dribble_headers_respond);
let url = format!("http://localhost:{}/", server.port); let url = format!("http://localhost:{}/", server.port);
let agent = builder().timeout(Duration::from_millis(500)).build(); let agent = builder().timeout(Duration::from_millis(500)).build();
let resp = agent.get(&url).call(); let result = agent.get(&url).call();
match resp { match result {
Ok(_) => Err("successful response".to_string()), Ok(_) => Err("successful response".to_string()),
Err(Error::Io(e)) if e.kind() == io::ErrorKind::TimedOut => Ok(()), Err(e) if e.kind() == ErrorKind::Io => {
let ioe: Option<&io::Error> = e.source().and_then(|s| s.downcast_ref());
match ioe {
Some(e) if e.kind() == io::ErrorKind::TimedOut => Ok(()),
_ => Err(format!("wrong error type {:?}", e)),
}
}
Err(e) => Err(format!("Unexpected error type: {:?}", e)), Err(e) => Err(format!("Unexpected error type: {:?}", e)),
} }
.expect("expected timeout but got something else"); .expect("expected timeout but got something else");

src/unit.rs

@@ -7,6 +7,7 @@ use url::Url;
#[cfg(feature = "cookies")] #[cfg(feature = "cookies")]
use cookie::Cookie; use cookie::Cookie;
use crate::error::{Error, ErrorKind};
use crate::header; use crate::header;
use crate::resolve::ArcResolver; use crate::resolve::ArcResolver;
use crate::stream::{self, connect_test, Stream}; use crate::stream::{self, connect_test, Stream};
@@ -15,7 +16,7 @@ use crate::{
body::{self, BodySize, Payload, SizedReader}, body::{self, BodySize, Payload, SizedReader},
header::get_header, header::get_header,
}; };
use crate::{Error, Header, Response}; use crate::{Header, Response};
/// A Unit is fully-built Request, ready to execute. /// A Unit is fully-built Request, ready to execute.
/// ///
@@ -173,7 +174,7 @@ pub(crate) fn connect(
let host = unit let host = unit
.url .url
.host_str() .host_str()
.ok_or(Error::BadUrl("no host".to_string()))?; .ok_or(ErrorKind::BadUrl.msg("no host in URL"))?;
let url = &unit.url; let url = &unit.url;
let method = &unit.method; let method = &unit.method;
// open socket // open socket
@@ -234,16 +235,18 @@ pub(crate) fn connect(
// handle redirects // handle redirects
if resp.redirect() && unit.agent.config.redirects > 0 { if resp.redirect() && unit.agent.config.redirects > 0 {
if redirect_count == unit.agent.config.redirects { if redirect_count == unit.agent.config.redirects {
return Err(Error::TooManyRedirects); return Err(ErrorKind::TooManyRedirects.new());
} }
// the location header // the location header
let location = resp.header("location"); let location = resp.header("location");
if let Some(location) = location { if let Some(location) = location {
// join location header to current url in case it is relative
let new_url = url let new_url = url.join(location).map_err(|e| {
.join(location) ErrorKind::BadUrl
.map_err(|_| Error::BadUrl(format!("Bad redirection: {}", location)))?; .msg(&format!("Bad redirection: {}", location))
.src(e)
})?;
// perform the redirect differently depending on 3xx code. // perform the redirect differently depending on 3xx code.
match resp.status() { match resp.status() {
@@ -302,7 +305,7 @@ fn extract_cookies(agent: &Agent, url: &Url) -> Option<Header> {
fn connect_socket(unit: &Unit, hostname: &str, use_pooled: bool) -> Result<(Stream, bool), Error> { fn connect_socket(unit: &Unit, hostname: &str, use_pooled: bool) -> Result<(Stream, bool), Error> {
match unit.url.scheme() { match unit.url.scheme() {
"http" | "https" | "test" => (), "http" | "https" | "test" => (),
_ => return Err(Error::UnknownScheme(unit.url.scheme().to_string())), scheme => return Err(ErrorKind::UnknownScheme.msg(&format!("unknown scheme '{}'", scheme))),
}; };
if use_pooled { if use_pooled {
let agent = &unit.agent; let agent = &unit.agent;
@@ -324,7 +327,7 @@ fn connect_socket(unit: &Unit, hostname: &str, use_pooled: bool) -> Result<(Stre
"http" => stream::connect_http(&unit, hostname), "http" => stream::connect_http(&unit, hostname),
"https" => stream::connect_https(&unit, hostname), "https" => stream::connect_https(&unit, hostname),
"test" => connect_test(&unit), "test" => connect_test(&unit),
_ => Err(Error::UnknownScheme(unit.url.scheme().to_string())), scheme => Err(ErrorKind::UnknownScheme.msg(&format!("unknown scheme {}", scheme))),
}; };
Ok((stream?, false)) Ok((stream?, false))
} }
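The redirect code above joins the `Location` header onto the current URL so that relative redirects resolve correctly. A standalone sketch of that join using the url crate directly:

```rust
use url::Url;

fn join_location() {
    let base = Url::parse("http://example.com/a/b/page").unwrap();
    // A relative Location resolves against the current URL...
    assert_eq!(
        base.join("../c").unwrap().as_str(),
        "http://example.com/a/c"
    );
    // ...while an absolute Location replaces it entirely.
    assert_eq!(
        base.join("https://other.example/").unwrap().as_str(),
        "https://other.example/"
    );
}
```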