
Comparing changes

base repository: stephank/hyper-staticfile
base: v0.9.5
head repository: stephank/hyper-staticfile
compare: v0.10.0

Commits on Nov 25, 2022

  1. Upgrade to Hyper 1.0-rc.1 (stephank committed Nov 25, 2022; 20e28fd)
  2. Version 0.10.0-alpha.1 (stephank committed Nov 25, 2022; fc00bdc)

Commits on Nov 30, 2022

  1. 1e40e31
  2. Merge pull request #36 from stephank/fix/drive-letter: Fix malicious drive letter in path (stephank authored Nov 30, 2022; ceda6cf)
  3. Version 0.10.0-alpha.2 (stephank committed Nov 30, 2022; 21397f9)

Commits on Dec 7, 2022

  1. Serve precompressed files (stephank committed Dec 7, 2022; 5e9a594)

Commits on Dec 9, 2022

  1. Make file streams generic (stephank committed Dec 9, 2022; 9daf6e8)
  2. Add a VFS layer (stephank committed Dec 9, 2022; 61f6504)
  3. bc7dd7b

Commits on Dec 11, 2022

  1. 4258f77
  2. 6d8ae45
  3. 71f3237
  4. Version 0.10.0-alpha.3 (stephank committed Dec 11, 2022; 0b45afa)

Commits on Dec 12, 2022

  1. 05b47c8
  2. 1fcccdc
  3. Doc tweaks (stephank committed Dec 12, 2022; de2262a)
  4. fb277a5
  5. Version 0.10.0-alpha.4 (stephank committed Dec 12, 2022; 2ff8d89)

Commits on Dec 23, 2022

  1. f12cadc
  2. Version 0.10.0-alpha.5 (stephank committed Dec 23, 2022; 204b7fc)

Commits on Jan 23, 2023

  1. Export low-level types again (stephank committed Jan 23, 2023; 6859a09)

Commits on Mar 24, 2023

  1. Replace tempdir with tempfire (alexanderkjall committed Mar 24, 2023; 81f7ca0)
     Deprecation notice of tempdir: https://crates.io/crates/tempdir. Fixes https://rustsec.org/advisories/RUSTSEC-2023-0018.html by removing that dependency.
  2. Merge pull request #40 from alexanderkjall/replace-tempdir-to-tempfile: Replace tempdir with tempfire (stephank authored Mar 24, 2023; 8e917de)

Commits on May 26, 2023

  1. a840c35
  2. Merge pull request #42 from stephank/feat/resolved-path: Add the resolved file path to `ResolvedFile` (stephank authored May 26, 2023; 479fefa)

Commits on May 30, 2023

  1. Update actions/checkout (stephank committed May 30, 2023; b5f21bf)
  2. Version 0.10.0-alpha.6 (stephank committed May 30, 2023; 48d722f)

Commits on Jul 15, 2023

  1. 41076ac

Commits on Jul 18, 2023

  1. Merge pull request #43 from aaronriekenberg/main: Step up to hyper 1.0.0-rc.4 (stephank authored Jul 18, 2023; 1513079)
  2. Version 0.10.0-alpha.7 (stephank committed Jul 18, 2023; 7693e1f)

Commits on Nov 15, 2023

  1. Upgrade to hyper 1.0 (stephank committed Nov 15, 2023; 51f1308)
  2. cargo clippy --fix (stephank committed Nov 15, 2023; 94d39db)
  3. Merge pull request #45 from stephank/feat/upgrade: Upgrade to hyper 1.0 (stephank authored Nov 15, 2023; 960a750)

Commits on Nov 17, 2023

  1. Add rewrite function (stephank committed Nov 17, 2023; ab267aa)
  2. 3c47005

Commits on Nov 18, 2023

  1. Merge pull request #46 from stephank/feat/rewrite: Add rewrite function (stephank authored Nov 18, 2023; c135201)
  2. Version 0.10.0 (stephank committed Nov 18, 2023; c57fea7)

Showing with 1,213 additions and 432 deletions.
  1. +3 −3 .github/workflows/check.yml
  2. +9 −7 Cargo.toml
  3. +1 −1 README.md
  4. +33 −15 examples/doc_server.rs
  5. +1 −1 rustfmt.toml
  6. +43 −0 src/body.rs
  7. +15 −21 src/lib.rs
  8. +365 −80 src/resolve.rs
  9. +19 −17 src/response_builder.rs
  10. +59 −31 src/service.rs
  11. +70 −139 src/util/file_bytes_stream.rs
  12. +47 −34 src/util/file_response_builder.rs
  13. +2 −3 src/util/mod.rs
  14. +0 −23 src/util/open_with_metadata.rs
  15. +8 −12 src/util/requested_path.rs
  16. +395 −0 src/vfs.rs
  17. +8 −13 tests/hyper.rs
  18. +135 −32 tests/static.rs
6 changes: 3 additions & 3 deletions .github/workflows/check.yml
@@ -2,9 +2,9 @@ name: Check

on:
push:
branches: [ "0.9" ]
branches: [ main ]
pull_request:
branches: [ "0.9" ]
branches: [ main ]

env:
CARGO_TERM_COLOR: always
@@ -20,7 +20,7 @@ jobs:
steps:

- name: Checkout
uses: actions/checkout@v2
uses: actions/checkout@v3

- name: Install Rust
uses: actions-rs/toolchain@v1
16 changes: 9 additions & 7 deletions Cargo.toml
@@ -1,8 +1,8 @@
[package]
name = "hyper-staticfile"
version = "0.9.2"
version = "0.10.0"
authors = ["hyper-staticfile contributors"]
description = "Static file serving for Hyper 0.14"
description = "Static file serving for Hyper 1.0"
repository = "https://github.com/stephank/hyper-staticfile"
license = "MIT"
readme = "README.md"
@@ -13,20 +13,22 @@ edition = "2018"

[dependencies]
futures-util = "0.3.1"
http = "0.2.0"
http = "1.0.0"
httpdate = "1.0.1"
http-range = "0.1.4"
hyper = { version = "0.14.0", features = ["stream"] }
hyper = "1.0.0"
mime_guess = "2.0.1"
percent-encoding = "2.1.0"
rand = "0.8.4"
tokio = { version = "1.0.0", features = ["fs"] }
url = "2.1.0"

[dev-dependencies]
hyper = { version = "0.14.0", features = ["http1", "server", "tcp"] }
tempdir = "0.3.7"
tokio = { version = "1.0.0", features = ["macros", "rt-multi-thread"] }
hyper = { version = "1.0.0", features = ["http1", "server"] }
hyper-util = { version = "0.1.1", features = ["tokio"] }
http-body-util = "0.1.0"
tempfile = "3"
tokio = { version = "1.0.0", features = ["macros", "rt-multi-thread", "net", "io-util"] }

[target.'cfg(windows)'.dependencies]
winapi = { version = "0.3.6", features = ["winbase"] }
2 changes: 1 addition & 1 deletion README.md
@@ -4,7 +4,7 @@
[![Crate](https://img.shields.io/crates/v/hyper-staticfile.svg)](https://crates.io/crates/hyper-staticfile)
[![Build Status](https://travis-ci.org/stephank/hyper-staticfile.svg?branch=master)](https://travis-ci.org/stephank/hyper-staticfile)

Static file-serving for [Hyper 0.14](https://github.com/hyperium/hyper).
Static file-serving for [Hyper 1.0](https://github.com/hyperium/hyper).

See [`examples/doc_server.rs`](examples/doc_server.rs) for a complete example that you can compile.

48 changes: 33 additions & 15 deletions examples/doc_server.rs
@@ -3,21 +3,24 @@
// Run `cargo doc && cargo run --example doc_server`, then
// point your browser to http://localhost:3000/

use futures_util::future;
use http::response::Builder as ResponseBuilder;
use http::{header, StatusCode};
use hyper::service::{make_service_fn, service_fn};
use hyper::{Body, Request, Response};
use hyper_staticfile::Static;
use std::io::Error as IoError;
use std::net::SocketAddr;
use std::path::Path;

use http::response::Builder as ResponseBuilder;
use http::{header, StatusCode};
use hyper::service::service_fn;
use hyper::{Request, Response};
use hyper_staticfile::{Body, Static};
use hyper_util::rt::TokioIo;
use tokio::net::TcpListener;

async fn handle_request<B>(req: Request<B>, static_: Static) -> Result<Response<Body>, IoError> {
if req.uri().path() == "/" {
let res = ResponseBuilder::new()
.status(StatusCode::MOVED_PERMANENTLY)
.header(header::LOCATION, "/hyper_staticfile/")
.body(Body::empty())
.body(Body::Empty)
.expect("unable to build response");
Ok(res)
} else {
@@ -29,13 +32,28 @@ async fn handle_request<B>(req: Request<B>, static_: Static) -> Result<Response<
async fn main() {
let static_ = Static::new(Path::new("target/doc/"));

let make_service = make_service_fn(|_| {
let static_ = static_.clone();
future::ok::<_, hyper::Error>(service_fn(move |req| handle_request(req, static_.clone())))
});

let addr = ([127, 0, 0, 1], 3000).into();
let server = hyper::Server::bind(&addr).serve(make_service);
let addr: SocketAddr = ([127, 0, 0, 1], 3000).into();
let listener = TcpListener::bind(addr)
.await
.expect("Failed to create TCP listener");
eprintln!("Doc server running on http://{}/", addr);
server.await.expect("Server failed");
loop {
let (stream, _) = listener
.accept()
.await
.expect("Failed to accept TCP connection");

let static_ = static_.clone();
tokio::spawn(async move {
if let Err(err) = hyper::server::conn::http1::Builder::new()
.serve_connection(
TokioIo::new(stream),
service_fn(move |req| handle_request(req, static_.clone())),
)
.await
{
eprintln!("Error serving connection: {:?}", err);
}
});
}
}
2 changes: 1 addition & 1 deletion rustfmt.toml
@@ -1 +1 @@
edition = "2021"
edition = "2018"
43 changes: 43 additions & 0 deletions src/body.rs
@@ -0,0 +1,43 @@
use std::{
io::Error as IoError,
pin::Pin,
task::{ready, Context, Poll},
};

use futures_util::stream::Stream;
use hyper::body::{Bytes, Frame};

use crate::{
util::{FileBytesStream, FileBytesStreamMultiRange, FileBytesStreamRange},
vfs::{FileAccess, TokioFileAccess},
};

/// Hyper Body implementation for the various types of streams used in static serving.
pub enum Body<F = TokioFileAccess> {
/// No response body.
Empty,
/// Serve a complete file.
Full(FileBytesStream<F>),
/// Serve a range from a file.
Range(FileBytesStreamRange<F>),
/// Serve multiple ranges from a file.
MultiRange(FileBytesStreamMultiRange<F>),
}

impl<F: FileAccess> hyper::body::Body for Body<F> {
type Data = Bytes;
type Error = IoError;

fn poll_frame(
mut self: Pin<&mut Self>,
cx: &mut Context<'_>,
) -> Poll<Option<Result<Frame<Bytes>, IoError>>> {
let opt = ready!(match *self {
Body::Empty => return Poll::Ready(None),
Body::Full(ref mut stream) => Pin::new(stream).poll_next(cx),
Body::Range(ref mut stream) => Pin::new(stream).poll_next(cx),
Body::MultiRange(ref mut stream) => Pin::new(stream).poll_next(cx),
});
Poll::Ready(opt.map(|res| res.map(Frame::data)))
}
}
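
For orientation, here is a minimal sketch of how this new Body type can be built by hand from the stream types it wraps. The helper function and file path are illustrative, not part of the crate:

use hyper_staticfile::{util::FileBytesStream, vfs::TokioFileAccess, Body};

// Hypothetical helper: open a file with tokio and stream all of it as Body::Full.
async fn file_response(path: &str) -> Result<http::Response<Body>, std::io::Error> {
    let file = tokio::fs::File::open(path).await?;
    let stream = FileBytesStream::new(TokioFileAccess::new(file));
    Ok(http::Response::builder()
        .status(http::StatusCode::OK)
        .body(Body::Full(stream))
        .expect("response build failed"))
}
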
36 changes: 15 additions & 21 deletions src/lib.rs
@@ -1,7 +1,7 @@
#![crate_name = "hyper_staticfile"]
#![deny(missing_docs)]

//! Static file-serving for [Hyper 0.14](https://github.com/hyperium/hyper).
//! Static file-serving for [Hyper 1.0](https://github.com/hyperium/hyper).
//!
//! This library exports a high-level interface `Static` for simple file-serving, and lower-level
//! interfaces for more control over responses.
@@ -13,7 +13,7 @@
//! trait. It can be used as:
//!
//! ```rust
//! // Instance of `Static` containing configuration.
//! // Instance of `Static` containing configuration. Can be cheaply cloned.
//! let static_ = hyper_staticfile::Static::new("my/doc/root/");
//!
//! // A dummy request, but normally obtained from Hyper.
@@ -30,26 +30,24 @@
//!
//! ## Advanced usage
//!
//! The `Static` type is a simple wrapper for `resolve` and `ResponseBuilder`. You can achieve the
//! The `Static` type is a simple wrapper for `Resolver` and `ResponseBuilder`. You can achieve the
//! same by doing something similar to the following:
//!
//! ```rust
//! use std::path::Path;
//!
//! #[tokio::main]
//! async fn main() {
//! // Document root path.
//! let root = Path::new("my/doc/root/");
//! // Create a resolver. This can be cheaply cloned.
//! let resolver = hyper_staticfile::Resolver::new("my/doc/root/");
//!
//! // A dummy request, but normally obtained from Hyper.
//! let request = http::Request::get("/foo/bar.txt")
//! .body(())
//! .unwrap();
//!
//! // First, resolve the request. Returns a future for a `ResolveResult`.
//! let result = hyper_staticfile::resolve(&root, &request)
//! .await
//! .unwrap();
//! let result = resolver.resolve_request(&request).await.unwrap();
//!
//! // Then, build a response based on the result.
//! // The `ResponseBuilder` is typically a short-lived, per-request instance.
@@ -60,30 +58,26 @@
//! }
//! ```
//!
//! The `resolve` function tries to find the file in the root, and returns a future for the
//! `ResolveResult` enum, which determines what kind of response should be sent. The
//! The `resolve_request` method tries to find the file in the document root, and returns a future
//! for the `ResolveResult` enum, which determines what kind of response should be sent. The
//! `ResponseBuilder` is then used to create a default response. It holds some settings, and can be
//! constructed using the builder pattern.
//!
//! It's useful to sit between these two steps to implement custom 404 pages, for example. Your
//! custom logic can override specific cases of `ResolveResult`, and fall back to the default
//! behavior using `ResponseBuilder` if necessary.
//!
//! The `ResponseBuilder` in turn uses `FileResponseBuilder` to serve files that are found. The
//! `FileResponseBuilder` can also be used directly if you have an existing open `tokio::fs::File`
//! and want to serve it. It takes care of basic headers, 'not modified' responses, and streaming
//! the file in the body.
//!
//! Finally, there's `FileBytesStream`, which is used by `FileResponseBuilder` to stream the file.
//! This is a struct wrapping a `tokio::fs::File` and implementing a `futures::Stream` that
//! produces `Bytes`s. It can be used for streaming a file in custom response.
mod body;
mod resolve;
mod response_builder;
mod service;
mod util;

/// Lower level utilities.
pub mod util;
/// Types to implement a custom (virtual) filesystem to serve files from.
pub mod vfs;

pub use crate::body::Body;
pub use crate::resolve::*;
pub use crate::response_builder::*;
pub use crate::service::*;
pub use crate::util::{FileBytesStream, FileResponseBuilder};
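
The doc comment above suggests sitting between resolve_request and ResponseBuilder to customize behavior. A hedged sketch of a custom 404 along those lines (here the fallback is simply an empty 404 body, and errors are unwrapped for brevity):

use hyper_staticfile::{Body, ResolveResult, Resolver, ResponseBuilder};

// Sketch: resolve first, override the NotFound case, and fall back to the
// default ResponseBuilder behavior for everything else.
async fn serve(resolver: &Resolver, request: &http::Request<()>) -> http::Response<Body> {
    let result = resolver.resolve_request(request).await.unwrap();
    match result {
        ResolveResult::NotFound => http::Response::builder()
            .status(http::StatusCode::NOT_FOUND)
            .body(Body::Empty)
            .unwrap(),
        other => ResponseBuilder::new()
            .request(request)
            .build(other)
            .unwrap(),
    }
}
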
445 changes: 365 additions & 80 deletions src/resolve.rs

Large diffs are not rendered by default.

36 changes: 19 additions & 17 deletions src/response_builder.rs
@@ -1,8 +1,9 @@
use crate::resolve::ResolveResult;
use crate::util::FileResponseBuilder;
use http::response::Builder as HttpResponseBuilder;
use http::{header, HeaderMap, Method, Request, Response, Result, StatusCode, Uri};
use hyper::Body;
use http::{
header, response::Builder as HttpResponseBuilder, HeaderMap, Method, Request, Response, Result,
StatusCode, Uri,
};

use crate::{resolve::ResolveResult, util::FileResponseBuilder, vfs::IntoFileAccess, Body};

/// Utility to build the default response for a `resolve` result.
///
@@ -72,20 +73,24 @@ impl<'a> ResponseBuilder<'a> {
///
/// This function may error if the response could not be constructed, but this should be a
/// seldom occurrence.
pub fn build(&self, result: ResolveResult) -> Result<Response<Body>> {
pub fn build<F: IntoFileAccess>(
&self,
result: ResolveResult<F>,
) -> Result<Response<Body<F::Output>>> {
match result {
ResolveResult::MethodNotMatched => HttpResponseBuilder::new()
.status(StatusCode::BAD_REQUEST)
.body(Body::empty()),
.body(Body::Empty),
ResolveResult::NotFound => HttpResponseBuilder::new()
.status(StatusCode::NOT_FOUND)
.body(Body::empty()),
.body(Body::Empty),
ResolveResult::PermissionDenied => HttpResponseBuilder::new()
.status(StatusCode::FORBIDDEN)
.body(Body::empty()),
ResolveResult::IsDirectory => {
let mut target = self.path.to_owned();
target.push('/');
.body(Body::Empty),
ResolveResult::IsDirectory {
redirect_to: mut target,
} => {
// Preserve any query string from the original request.
if let Some(query) = self.query {
target.push('?');
target.push_str(query);
@@ -94,12 +99,9 @@ impl<'a> ResponseBuilder<'a> {
HttpResponseBuilder::new()
.status(StatusCode::MOVED_PERMANENTLY)
.header(header::LOCATION, target)
.body(Body::empty())
}
ResolveResult::Found(file, metadata, mime) => {
self.file_response_builder
.build(file, metadata, mime.to_string())
.body(Body::Empty)
}
ResolveResult::Found(file) => self.file_response_builder.build(file),
}
}
}
90 changes: 59 additions & 31 deletions src/service.rs
@@ -1,42 +1,52 @@
use crate::{resolve, ResponseBuilder};
use std::{future::Future, io::Error as IoError, path::PathBuf, pin::Pin};

use http::{Request, Response};
use hyper::{service::Service, Body};
use std::future::Future;
use std::io::Error as IoError;
use std::path::PathBuf;
use std::pin::Pin;
use std::task::{Context, Poll};
use hyper::service::Service;

use crate::{
vfs::{FileOpener, IntoFileAccess, TokioFileOpener},
AcceptEncoding, Body, Resolver, ResponseBuilder,
};

/// High-level interface for serving static files.
///
/// This struct serves files from a single root path, which may be absolute or relative. The
/// request is mapped onto the filesystem by appending their URL path to the root path. If the
/// filesystem path corresponds to a regular file, the service will attempt to serve it. Otherwise,
/// if the path corresponds to a directory containing an `index.html`, the service will attempt to
/// serve that instead.
/// This service serves files based on the request path. The path is first sanitized, then mapped
/// to a file on the filesystem. If the path corresponds to a directory, it will try to look for a
/// directory index.
///
/// This struct allows direct access to its fields, but these fields are typically initialized by
/// the accessors, using the builder pattern. The fields are basically a bunch of settings that
/// determine the response details.
///
/// This struct also implements the `hyper::Service` trait, which simply wraps `Static::serve`.
/// Note that using the trait currently involves an extra `Box`.
#[derive(Clone)]
pub struct Static {
/// The root directory path to serve files from.
pub root: PathBuf,
///
/// Cloning this struct is a cheap operation.
pub struct Static<O = TokioFileOpener> {
/// The resolver instance used to open files.
pub resolver: Resolver<O>,
/// Whether to send cache headers, and what lifespan to indicate.
pub cache_headers: Option<u32>,
}

impl Static {
impl Static<TokioFileOpener> {
/// Create a new instance of `Static` with a given root path.
///
/// If `Path::new("")` is given, files will be served from the current directory.
/// The path may be absolute or relative. If `Path::new("")` is used, files will be served from
/// the current directory.
pub fn new(root: impl Into<PathBuf>) -> Self {
let root = root.into();
Static {
root,
Self {
resolver: Resolver::new(root),
cache_headers: None,
}
}
}

impl<O: FileOpener> Static<O> {
/// Create a new instance of `Static` with the given root directory.
pub fn with_opener(opener: O) -> Self {
Self {
resolver: Resolver::with_opener(opener),
cache_headers: None,
}
}
@@ -47,13 +57,22 @@ impl Static {
self
}

/// Set the encodings the client is allowed to request via the `Accept-Encoding` header.
pub fn allowed_encodings(&mut self, allowed_encodings: AcceptEncoding) -> &mut Self {
self.resolver.allowed_encodings = allowed_encodings;
self
}

/// Serve a request.
pub async fn serve<B>(self, request: Request<B>) -> Result<Response<Body>, IoError> {
pub async fn serve<B>(
self,
request: Request<B>,
) -> Result<Response<Body<<O::File as IntoFileAccess>::Output>>, IoError> {
let Self {
root,
resolver,
cache_headers,
} = self;
resolve(root, &request).await.map(|result| {
resolver.resolve_request(&request).await.map(|result| {
ResponseBuilder::new()
.request(&request)
.cache_headers(cache_headers)
@@ -63,16 +82,25 @@ impl Static {
}
}

impl<B: Send + Sync + 'static> Service<Request<B>> for Static {
type Response = Response<Body>;
impl<O> Clone for Static<O> {
fn clone(&self) -> Self {
Self {
resolver: self.resolver.clone(),
cache_headers: self.cache_headers,
}
}
}

impl<O, B> Service<Request<B>> for Static<O>
where
O: FileOpener,
B: Send + Sync + 'static,
{
type Response = Response<Body<<O::File as IntoFileAccess>::Output>>;
type Error = IoError;
type Future = Pin<Box<dyn Future<Output = Result<Self::Response, Self::Error>> + Send>>;

fn poll_ready(&mut self, _cx: &mut Context) -> Poll<Result<(), Self::Error>> {
Poll::Ready(Ok(()))
}

fn call(&mut self, request: Request<B>) -> Self::Future {
fn call(&self, request: Request<B>) -> Self::Future {
Box::pin(self.clone().serve(request))
}
}
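
Combined with the VFS changes further down, the reworked Static builder might be used roughly like this (a sketch; MemoryFs and AcceptEncoding::all() appear in src/vfs.rs and tests/static.rs below, while the helper function is illustrative):

use hyper::body::Bytes;
use hyper_staticfile::{vfs::MemoryFs, AcceptEncoding, Static};

// Sketch: serve an in-memory filesystem, allowing clients to negotiate any
// precompressed variants via Accept-Encoding.
fn build_service() -> Static<MemoryFs> {
    let mut fs = MemoryFs::default();
    fs.add("index.html", Bytes::from_static(b"hello"), None);

    let mut static_ = Static::with_opener(fs);
    static_
        .cache_headers(Some(3600))
        .allowed_encodings(AcceptEncoding::all());
    static_
}
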
209 changes: 70 additions & 139 deletions src/util/file_bytes_stream.rs
@@ -1,66 +1,57 @@
use std::{
fmt::Write,
io::{Error as IoError, SeekFrom},
pin::Pin,
task::{Context, Poll},
vec,
};

use futures_util::stream::Stream;
use http_range::HttpRange;
use hyper::body::{Body, Bytes};
use std::cmp::min;
use std::io::{Cursor, Error as IoError, SeekFrom, Write};
use std::mem::MaybeUninit;
use std::pin::Pin;
use std::task::{Context, Poll};
use std::vec;
use tokio::fs::File;
use tokio::io::{AsyncRead, AsyncSeek, ReadBuf};

const BUF_SIZE: usize = 8 * 1024;

/// Wraps a `tokio::fs::File`, and implements a stream of `Bytes`s.
pub struct FileBytesStream {
file: File,
buf: Box<[MaybeUninit<u8>; BUF_SIZE]>,
use hyper::body::Bytes;

use crate::vfs::{FileAccess, TokioFileAccess};

/// Wraps a `FileAccess` and implements a stream of `Bytes`s.
pub struct FileBytesStream<F = TokioFileAccess> {
file: F,
remaining: u64,
}

impl FileBytesStream {
impl<F> FileBytesStream<F> {
/// Create a new stream from the given file.
pub fn new(file: File) -> FileBytesStream {
let buf = Box::new([MaybeUninit::uninit(); BUF_SIZE]);
FileBytesStream {
pub fn new(file: F) -> Self {
Self {
file,
buf,
remaining: u64::MAX,
}
}

/// Create a new stream from the given file, reading up to `limit` bytes.
pub fn new_with_limit(file: File, limit: u64) -> FileBytesStream {
let buf = Box::new([MaybeUninit::uninit(); BUF_SIZE]);
FileBytesStream {
pub fn new_with_limit(file: F, limit: u64) -> Self {
Self {
file,
buf,
remaining: limit,
}
}
}

impl Stream for FileBytesStream {
impl<F: FileAccess> Stream for FileBytesStream<F> {
type Item = Result<Bytes, IoError>;

fn poll_next(mut self: Pin<&mut Self>, cx: &mut Context) -> Poll<Option<Self::Item>> {
let Self {
ref mut file,
ref mut buf,
ref mut remaining,
} = *self;

let max_read_length = min(*remaining, buf.len() as u64) as usize;
let mut read_buf = ReadBuf::uninit(&mut buf[..max_read_length]);
match Pin::new(file).poll_read(cx, &mut read_buf) {
Poll::Ready(Ok(())) => {
let filled = read_buf.filled();
*remaining -= filled.len() as u64;
if filled.is_empty() {
match Pin::new(file).poll_read(cx, *remaining as usize) {
Poll::Ready(Ok(buf)) => {
*remaining -= buf.len() as u64;
if buf.is_empty() {
Poll::Ready(None)
} else {
Poll::Ready(Some(Ok(Bytes::copy_from_slice(filled))))
Poll::Ready(Some(Ok(buf)))
}
}
Poll::Ready(Err(e)) => Poll::Ready(Some(Err(e))),
@@ -69,48 +60,40 @@ impl Stream for FileBytesStream {
}
}

impl FileBytesStream {
/// Create a Hyper `Body` from this stream.
pub fn into_body(self) -> Body {
Body::wrap_stream(self)
}
}

#[derive(PartialEq, Eq)]
enum FileSeekState {
NeedSeek,
Seeking,
Reading,
}

/// Wraps a `tokio::fs::File`, and implements a stream of `Bytes`s reading a portion of the
/// file given by `range`.
pub struct FileBytesStreamRange {
file_stream: FileBytesStream,
/// Wraps a `FileAccess` and implements a stream of `Bytes`s reading a portion of the file.
pub struct FileBytesStreamRange<F = TokioFileAccess> {
file_stream: FileBytesStream<F>,
seek_state: FileSeekState,
start_offset: u64,
}

impl FileBytesStreamRange {
/// Create a new stream from the given file and range
pub fn new(file: File, range: HttpRange) -> FileBytesStreamRange {
FileBytesStreamRange {
impl<F> FileBytesStreamRange<F> {
/// Create a new stream from the given file and range.
pub fn new(file: F, range: HttpRange) -> Self {
Self {
file_stream: FileBytesStream::new_with_limit(file, range.length),
seek_state: FileSeekState::NeedSeek,
start_offset: range.start,
}
}

fn without_initial_range(file: File) -> FileBytesStreamRange {
FileBytesStreamRange {
fn without_initial_range(file: F) -> Self {
Self {
file_stream: FileBytesStream::new_with_limit(file, 0),
seek_state: FileSeekState::NeedSeek,
start_offset: 0,
}
}
}

impl Stream for FileBytesStreamRange {
impl<F: FileAccess> Stream for FileBytesStreamRange<F> {
type Item = Result<Bytes, IoError>;

fn poll_next(mut self: Pin<&mut Self>, cx: &mut Context) -> Poll<Option<Self::Item>> {
@@ -138,19 +121,10 @@ impl Stream for FileBytesStreamRange {
}
}

impl FileBytesStreamRange {
/// Create a Hyper `Body` from this stream.
pub fn into_body(self) -> Body {
Body::wrap_stream(self)
}
}

/// Wraps a `tokio::fs::File`, and implements a stream of `Bytes`s reading multiple portions of
/// the file given by `ranges` using a chunked multipart/byteranges response. A boundary is
/// required to separate the chunked components and therefore needs to be unlikely to be in any
/// file.
pub struct FileBytesStreamMultiRange {
file_range: FileBytesStreamRange,
/// Wraps a `FileAccess` and implements a stream of `Bytes`s providing multiple ranges of the file
/// contents in HTTP chunked transfer encoding.
pub struct FileBytesStreamMultiRange<F = TokioFileAccess> {
file_range: FileBytesStreamRange<F>,
range_iter: vec::IntoIter<HttpRange>,
is_first_boundary: bool,
completed: bool,
@@ -159,15 +133,13 @@ pub struct FileBytesStreamMultiRange {
file_length: u64,
}

impl FileBytesStreamMultiRange {
impl<F> FileBytesStreamMultiRange<F> {
/// Create a new stream from the given file, ranges, boundary and file length.
pub fn new(
file: File,
ranges: Vec<HttpRange>,
boundary: String,
file_length: u64,
) -> FileBytesStreamMultiRange {
FileBytesStreamMultiRange {
///
/// A boundary is required to separate the chunked components and therefore needs to be
/// unlikely to be in any file.
pub fn new(file: F, ranges: Vec<HttpRange>, boundary: String, file_length: u64) -> Self {
Self {
file_range: FileBytesStreamRange::without_initial_range(file),
range_iter: ranges.into_iter(),
boundary,
@@ -184,11 +156,9 @@ impl FileBytesStreamMultiRange {
}

/// Computes the length of the body for the multi-range response being produced by this
/// `FileBytesStreamMultiRange`. This function is required to be mutable because it temporarily
/// uses pre-allocated buffers.
pub fn compute_length(&mut self) -> u64 {
/// `FileBytesStreamMultiRange`.
pub fn compute_length(&self) -> u64 {
let Self {
ref mut file_range,
ref range_iter,
ref boundary,
ref content_type,
@@ -199,76 +169,53 @@ impl FileBytesStreamMultiRange {
let mut total_length = 0;
let mut is_first = true;
for range in range_iter.as_slice() {
let mut read_buf = ReadBuf::uninit(&mut file_range.file_stream.buf[..]);
render_multipart_header(
&mut read_buf,
boundary,
content_type,
*range,
is_first,
file_length,
);
let header =
render_multipart_header(boundary, content_type, *range, is_first, file_length);

is_first = false;
total_length += read_buf.filled().len() as u64;
total_length += header.as_bytes().len() as u64;
total_length += range.length;
}

let mut read_buf = ReadBuf::uninit(&mut file_range.file_stream.buf[..]);
render_multipart_header_end(&mut read_buf, boundary);
total_length += read_buf.filled().len() as u64;
let header = render_multipart_header_end(boundary);
total_length += header.as_bytes().len() as u64;

total_length
}
}

fn render_multipart_header(
read_buf: &mut ReadBuf<'_>,
boundary: &str,
content_type: &str,
range: HttpRange,
is_first: bool,
file_length: u64,
) {
) -> String {
let mut buf = String::with_capacity(128);
if !is_first {
read_buf.put_slice(b"\r\n");
buf.push_str("\r\n");
}
read_buf.put_slice(b"--");
read_buf.put_slice(boundary.as_bytes());
read_buf.put_slice(b"\r\nContent-Range: bytes ");

// 64 is 20 (max length of 64 bit integer) * 3 + 4 (symbols, new line)
let mut tmp_buffer = [0; 64];
let mut tmp_storage = Cursor::new(&mut tmp_buffer[..]);
write!(
&mut tmp_storage,
"{}-{}/{}\r\n",
&mut buf,
"--{boundary}\r\nContent-Range: bytes {}-{}/{file_length}\r\n",
range.start,
range.start + range.length - 1,
file_length,
)
.expect("buffer unexpectedly too small");

let end_position = tmp_storage.position() as usize;
let tmp_storage = tmp_storage.into_inner();
read_buf.put_slice(&tmp_storage[..end_position]);
.expect("buffer write failed");

if !content_type.is_empty() {
read_buf.put_slice(b"Content-Type: ");
read_buf.put_slice(content_type.as_bytes());
read_buf.put_slice(b"\r\n");
write!(&mut buf, "Content-Type: {content_type}\r\n").expect("buffer write failed");
}

read_buf.put_slice(b"\r\n");
buf.push_str("\r\n");
buf
}

fn render_multipart_header_end(read_buf: &mut ReadBuf<'_>, boundary: &str) {
read_buf.put_slice(b"\r\n--");
read_buf.put_slice(boundary.as_bytes());
read_buf.put_slice(b"--\r\n");
fn render_multipart_header_end(boundary: &str) -> String {
format!("\r\n--{boundary}--\r\n")
}

impl Stream for FileBytesStreamMultiRange {
impl<F: FileAccess> Stream for FileBytesStreamMultiRange<F> {
type Item = Result<Bytes, IoError>;

fn poll_next(mut self: Pin<&mut Self>, cx: &mut Context) -> Poll<Option<Self::Item>> {
@@ -292,9 +239,8 @@ impl Stream for FileBytesStreamMultiRange {
None => {
*completed = true;

let mut read_buf = ReadBuf::uninit(&mut file_range.file_stream.buf[..]);
render_multipart_header_end(&mut read_buf, boundary);
return Poll::Ready(Some(Ok(Bytes::copy_from_slice(read_buf.filled()))));
let header = render_multipart_header_end(boundary);
return Poll::Ready(Some(Ok(header.into())));
}
};

@@ -305,26 +251,11 @@ impl Stream for FileBytesStreamMultiRange {
let cur_is_first = *is_first_boundary;
*is_first_boundary = false;

let mut read_buf = ReadBuf::uninit(&mut file_range.file_stream.buf[..]);
render_multipart_header(
&mut read_buf,
boundary,
content_type,
range,
cur_is_first,
file_length,
);

return Poll::Ready(Some(Ok(Bytes::copy_from_slice(read_buf.filled()))));
let header =
render_multipart_header(boundary, content_type, range, cur_is_first, file_length);
return Poll::Ready(Some(Ok(header.into())));
}

Pin::new(file_range).poll_next(cx)
}
}

impl FileBytesStreamMultiRange {
/// Create a Hyper `Body` from this stream.
pub fn into_body(self) -> Body {
Body::wrap_stream(self)
}
}
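
As a rough usage sketch of the now-generic multi-range stream (assuming the file is at least 10 bytes; the helper function and boundary value are illustrative):

use http_range::HttpRange;
use hyper_staticfile::{util::FileBytesStreamMultiRange, vfs::TokioFileAccess, Body};

// Sketch: stream the first and last 10 bytes of a file as a single
// multipart/byteranges body. The boundary must not occur in the file itself.
async fn two_ranges(path: &str) -> Result<Body, std::io::Error> {
    let file = tokio::fs::File::open(path).await?;
    let len = file.metadata().await?.len();
    let ranges = vec![
        HttpRange { start: 0, length: 10 },
        HttpRange { start: len - 10, length: 10 },
    ];
    let stream = FileBytesStreamMultiRange::new(
        TokioFileAccess::new(file),
        ranges,
        "BOUNDARY".to_string(),
        len,
    );
    Ok(Body::MultiRange(stream))
}
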
81 changes: 47 additions & 34 deletions src/util/file_response_builder.rs
@@ -1,13 +1,17 @@
use super::{FileBytesStream, FileBytesStreamMultiRange, FileBytesStreamRange};
use http::response::Builder as ResponseBuilder;
use http::{header, HeaderMap, Method, Request, Response, Result, StatusCode};
use http_range::HttpRange;
use http_range::HttpRangeParseError;
use hyper::Body;
use rand::prelude::{thread_rng, SliceRandom};
use std::fs::Metadata;
use std::time::{Duration, SystemTime, UNIX_EPOCH};
use tokio::fs::File;

use http::{
header, response::Builder as ResponseBuilder, HeaderMap, Method, Request, Response, Result,
StatusCode,
};
use http_range::{HttpRange, HttpRangeParseError};
use rand::prelude::{thread_rng, SliceRandom};

use crate::{
util::{FileBytesStream, FileBytesStreamMultiRange, FileBytesStreamRange},
vfs::IntoFileAccess,
Body, ResolvedFile,
};

/// Minimum duration since Unix epoch we accept for file modification time.
///
@@ -19,7 +23,7 @@ const MIN_VALID_MTIME: Duration = Duration::from_secs(2);
const BOUNDARY_LENGTH: usize = 60;
const BOUNDARY_CHARS: &[u8] = b"0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz";

/// Utility to build responses for serving a `tokio::fs::File`.
/// Utility to build responses for serving a file.
///
/// This struct allows direct access to its fields, but these fields are typically initialized by
/// the accessors, using the builder pattern. The fields are basically a bunch of settings that
@@ -110,17 +114,15 @@ impl FileResponseBuilder {
self
}

/// Build a response for the given file and metadata.
pub fn build(
/// Build a response for the given resolved file.
pub fn build<F: IntoFileAccess>(
&self,
file: File,
metadata: Metadata,
content_type: String,
) -> Result<Response<Body>> {
file: ResolvedFile<F>,
) -> Result<Response<Body<F::Output>>> {
let mut res = ResponseBuilder::new();

// Set `Last-Modified` and check `If-Modified-Since`.
let modified = metadata.modified().ok().filter(|v| {
let modified = file.modified.filter(|v| {
v.duration_since(UNIX_EPOCH)
.ok()
.filter(|v| v >= &MIN_VALID_MTIME)
@@ -140,13 +142,13 @@ impl FileResponseBuilder {
if modified_unix.as_secs() <= ims_unix.as_secs() {
return ResponseBuilder::new()
.status(StatusCode::NOT_MODIFIED)
.body(Body::empty());
.body(Body::Empty);
}
}

let etag = format!(
"W/\"{0:x}-{1:x}.{2:x}\"",
metadata.len(),
file.size,
modified_unix.as_secs(),
modified_unix.subsec_nanos()
);
@@ -180,12 +182,12 @@ impl FileResponseBuilder {
}

if self.is_head {
res = res.header(header::CONTENT_LENGTH, format!("{}", metadata.len()));
return res.status(StatusCode::OK).body(Body::empty());
res = res.header(header::CONTENT_LENGTH, format!("{}", file.size));
return res.status(StatusCode::OK).body(Body::Empty);
}

let ranges = self.range.as_ref().filter(|_| range_cond_ok).and_then(|r| {
match HttpRange::parse(r, metadata.len()) {
match HttpRange::parse(r, file.size) {
Ok(r) => Some(Ok(r)),
Err(HttpRangeParseError::NoOverlap) => Some(Err(())),
Err(HttpRangeParseError::InvalidRange) => None,
@@ -198,7 +200,7 @@ impl FileResponseBuilder {
Err(()) => {
return res
.status(StatusCode::RANGE_NOT_SATISFIABLE)
.body(Body::empty());
.body(Body::Empty);
}
};

@@ -207,14 +209,15 @@ impl FileResponseBuilder {
res = res
.header(
header::CONTENT_RANGE,
content_range_header(&single_span, metadata.len()),
content_range_header(&single_span, file.size),
)
.header(header::CONTENT_LENGTH, format!("{}", single_span.length));

let body_stream = FileBytesStreamRange::new(file, single_span);
let body_stream =
FileBytesStreamRange::new(file.handle.into_file_access(), single_span);
return res
.status(StatusCode::PARTIAL_CONTENT)
.body(body_stream.into_body());
.body(Body::Range(body_stream));
} else if ranges.len() > 1 {
let mut boundary_tmp = [0u8; BOUNDARY_LENGTH];

@@ -232,10 +235,14 @@ impl FileResponseBuilder {
format!("multipart/byteranges; boundary={}", boundary),
);

let mut body_stream =
FileBytesStreamMultiRange::new(file, ranges, boundary, metadata.len());
if !content_type.is_empty() {
body_stream.set_content_type(&content_type);
let mut body_stream = FileBytesStreamMultiRange::new(
file.handle.into_file_access(),
ranges,
boundary,
file.size,
);
if let Some(content_type) = file.content_type.as_ref() {
body_stream.set_content_type(content_type);
}

res = res.header(
@@ -245,18 +252,24 @@ impl FileResponseBuilder {

return res
.status(StatusCode::PARTIAL_CONTENT)
.body(body_stream.into_body());
.body(Body::MultiRange(body_stream));
}
}

res = res.header(header::CONTENT_LENGTH, format!("{}", metadata.len()));
if !content_type.is_empty() {
res = res.header(header::CONTENT_LENGTH, format!("{}", file.size));
if let Some(content_type) = file.content_type {
res = res.header(header::CONTENT_TYPE, content_type);
}
if let Some(encoding) = file.encoding {
res = res.header(header::CONTENT_ENCODING, encoding.to_header_value());
}

// Stream the body.
res.status(StatusCode::OK)
.body(FileBytesStream::new_with_limit(file, metadata.len()).into_body())
.body(Body::Full(FileBytesStream::new_with_limit(
file.handle.into_file_access(),
file.size,
)))
}
}

5 changes: 2 additions & 3 deletions src/util/mod.rs
@@ -1,9 +1,8 @@
mod file_bytes_stream;
mod file_response_builder;
mod open_with_metadata;
mod requested_path;

pub use self::file_bytes_stream::*;
pub use self::file_response_builder::*;
pub use self::open_with_metadata::*;
pub use self::requested_path::*;

pub(crate) use self::requested_path::*;
23 changes: 0 additions & 23 deletions src/util/open_with_metadata.rs

This file was deleted.

20 changes: 8 additions & 12 deletions src/util/requested_path.rs
@@ -7,7 +7,7 @@ fn decode_percents(string: &str) -> String {
.into_owned()
}

fn normalize_path(path: &Path) -> PathBuf {
fn sanitize_path(path: &Path) -> PathBuf {
path.components()
.fold(PathBuf::new(), |mut result, p| match p {
Component::Normal(x) => {
@@ -29,25 +29,21 @@ fn normalize_path(path: &Path) -> PathBuf {
})
}

/// Resolved request path.
/// Processed request path.
pub struct RequestedPath {
/// Fully resolved filesystem path of the request.
pub full_path: PathBuf,
/// Whether a directory was requested. (`original` ends with a slash.)
/// Sanitized path of the request.
pub sanitized: PathBuf,
/// Whether a directory was requested. (The input ended with a slash.)
pub is_dir_request: bool,
}

impl RequestedPath {
/// Resolve the requested path to a full filesystem path, limited to the root.
pub fn resolve(root_path: impl Into<PathBuf>, request_path: &str) -> Self {
/// Process a request path.
pub fn resolve(request_path: &str) -> Self {
let is_dir_request = request_path.as_bytes().last() == Some(&b'/');
let request_path = PathBuf::from(decode_percents(request_path));

let mut full_path = root_path.into();
full_path.extend(&normalize_path(&request_path));

RequestedPath {
full_path,
sanitized: sanitize_path(&request_path),
is_dir_request,
}
}
395 changes: 395 additions & 0 deletions src/vfs.rs
@@ -0,0 +1,395 @@
use std::{
cmp::min,
collections::HashMap,
fs::OpenOptions,
future::Future,
io::{Cursor, Error, ErrorKind},
mem::MaybeUninit,
path::{Component, Path, PathBuf},
pin::Pin,
task::{Context, Poll},
time::SystemTime,
};

use futures_util::future::{ready, Ready};
use hyper::body::Bytes;
use tokio::{
fs::{self, File},
io::{AsyncRead, AsyncSeek, ReadBuf},
task::{spawn_blocking, JoinHandle},
};

#[cfg(windows)]
use std::os::windows::fs::OpenOptionsExt;
#[cfg(windows)]
use winapi::um::winbase::FILE_FLAG_BACKUP_SEMANTICS;

const TOKIO_READ_BUF_SIZE: usize = 8 * 1024;

/// Open file handle with metadata.
///
/// This struct exists because we want to abstract away tokio `File`, but need to use
/// `File`-specific operations to find the metadata and fill the additional fields here.
///
/// This struct is eventually converted to a `ResolvedFile`.
#[derive(Debug)]
pub struct FileWithMetadata<F = File> {
/// Open file handle.
pub handle: F,
/// Size in bytes.
pub size: u64,
/// Last modification time.
pub modified: Option<SystemTime>,
/// Whether this is a directory.
pub is_dir: bool,
}

/// Trait for a simple virtual filesystem layer.
///
/// There is only the `open` operation, hence the name `FileOpener`. In practice, `open` must also
/// collect some file metadata. (See the `FileWithMetadata` struct.)
pub trait FileOpener: Send + Sync + 'static {
/// File handle type.
type File: IntoFileAccess;

/// Future type that `open` returns.
type Future: Future<Output = Result<FileWithMetadata<Self::File>, Error>> + Send;

/// Open a file and return a `FileWithMetadata`.
///
/// It can be assumed the path is already sanitized at this point.
fn open(&self, path: &Path) -> Self::Future;
}

/// Trait that converts a file handle into something that implements `FileAccess`.
///
/// This trait is called when streaming starts, and exists as a separate step so that buffer
/// allocation doesn't have to happen until that point.
pub trait IntoFileAccess: Send + Unpin + 'static {
/// Target type that implements `FileAccess`.
type Output: FileAccess;

/// Convert into a type that implements `FileAccess`.
fn into_file_access(self) -> Self::Output;
}

/// Trait that implements all the necessary file access methods used for serving files.
///
/// This trait exists as an alternative to `AsyncRead` that returns a `Bytes` directly, potentially
/// eliminating a copy. Unlike `AsyncRead`, this does mean the implementation is responsible for
/// providing the read buffer.
pub trait FileAccess: AsyncSeek + Send + Unpin + 'static {
/// Attempts to read from the file.
///
/// If no data is available for reading, the method returns `Poll::Pending` and arranges for
/// the current task (via `cx.waker()`) to receive a notification when the object becomes
/// readable or is closed.
///
/// An empty `Bytes` return value indicates EOF.
fn poll_read(
self: Pin<&mut Self>,
cx: &mut Context<'_>,
len: usize,
) -> Poll<Result<Bytes, Error>>;
}

//
// Tokio File implementation
//

impl IntoFileAccess for File {
type Output = TokioFileAccess;

fn into_file_access(self) -> Self::Output {
TokioFileAccess::new(self)
}
}

/// Struct that wraps a tokio `File` to implement `FileAccess`.
pub struct TokioFileAccess {
file: File,
read_buf: Box<[MaybeUninit<u8>; TOKIO_READ_BUF_SIZE]>,
}

impl TokioFileAccess {
/// Create a new `TokioFileAccess` for a `File`.
pub fn new(file: File) -> Self {
TokioFileAccess {
file,
read_buf: Box::new([MaybeUninit::uninit(); TOKIO_READ_BUF_SIZE]),
}
}
}

impl AsyncSeek for TokioFileAccess {
fn start_seek(mut self: Pin<&mut Self>, position: std::io::SeekFrom) -> std::io::Result<()> {
Pin::new(&mut self.file).start_seek(position)
}

fn poll_complete(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<std::io::Result<u64>> {
Pin::new(&mut self.file).poll_complete(cx)
}
}

impl FileAccess for TokioFileAccess {
fn poll_read(
mut self: Pin<&mut Self>,
cx: &mut Context<'_>,
len: usize,
) -> Poll<Result<Bytes, Error>> {
let Self {
ref mut file,
ref mut read_buf,
} = *self;

let len = min(len, read_buf.len());
let mut read_buf = ReadBuf::uninit(&mut read_buf[..len]);
match Pin::new(file).poll_read(cx, &mut read_buf) {
Poll::Ready(Ok(())) => {
let filled = read_buf.filled();
if filled.is_empty() {
Poll::Ready(Ok(Bytes::new()))
} else {
Poll::Ready(Ok(Bytes::copy_from_slice(filled)))
}
}
Poll::Ready(Err(e)) => Poll::Ready(Err(e)),
Poll::Pending => Poll::Pending,
}
}
}

/// Filesystem implementation that uses `tokio::fs`.
pub struct TokioFileOpener {
/// The virtual root directory to use when opening files.
///
/// The path may be absolute or relative.
pub root: PathBuf,
}

impl TokioFileOpener {
/// Create a new `TokioFileOpener` for the given root path.
///
/// The path may be absolute or relative.
pub fn new(root: impl Into<PathBuf>) -> Self {
Self { root: root.into() }
}
}

impl FileOpener for TokioFileOpener {
type File = File;
type Future = TokioFileFuture;

fn open(&self, path: &Path) -> Self::Future {
let mut full_path = self.root.clone();
full_path.extend(path);

// Small perf gain: we do open + metadata in one go. If we used the tokio async functions
// here, that'd amount to two `spawn_blocking` calls behind the scenes.
let inner = spawn_blocking(move || {
let mut opts = OpenOptions::new();
opts.read(true);

// On Windows, we need to set this flag to be able to open directories.
#[cfg(windows)]
opts.custom_flags(FILE_FLAG_BACKUP_SEMANTICS);

let handle = opts.open(full_path)?;
let metadata = handle.metadata()?;
Ok(FileWithMetadata {
handle: File::from_std(handle),
size: metadata.len(),
modified: metadata.modified().ok(),
is_dir: metadata.is_dir(),
})
});

TokioFileFuture { inner }
}
}

/// Future type produced by `TokioFileOpener`.
///
/// This type mostly exists just to prevent a `Box<dyn Future>`.
pub struct TokioFileFuture {
inner: JoinHandle<Result<FileWithMetadata<File>, Error>>,
}

impl Future for TokioFileFuture {
type Output = Result<FileWithMetadata<File>, Error>;

fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
// The task produces a result, but so does the `JoinHandle`, so this is a
// `Result<Result<..>>`. We map the `JoinHandle` error to an IO error, so that we can
// flatten the results. This is similar to what tokio does, but that just uses `Map` and
// async functions (with an anonymous future type).
match Pin::new(&mut self.inner).poll(cx) {
Poll::Ready(Ok(res)) => Poll::Ready(res),
Poll::Ready(Err(_)) => {
Poll::Ready(Err(Error::new(ErrorKind::Other, "background task failed")))
}
Poll::Pending => Poll::Pending,
}
}
}

//
// In-memory implementation
//

type MemoryFileMap = HashMap<PathBuf, FileWithMetadata<Bytes>>;

impl IntoFileAccess for Cursor<Bytes> {
type Output = Self;

fn into_file_access(self) -> Self::Output {
// No read buffer required. We can simply create subslices.
self
}
}

impl FileAccess for Cursor<Bytes> {
fn poll_read(
self: Pin<&mut Self>,
_cx: &mut Context<'_>,
len: usize,
) -> Poll<Result<Bytes, Error>> {
let pos = self.position();
let slice = (*self).get_ref();

// The position could technically be out of bounds, so don't panic...
if pos > slice.len() as u64 {
return Poll::Ready(Ok(Bytes::new()));
}

let start = pos as usize;
let amt = min(slice.len() - start, len);
// Add won't overflow because of pos check above.
let end = start + amt;
Poll::Ready(Ok(slice.slice(start..end)))
}
}

/// An in-memory virtual filesystem.
///
/// This type implements `FileOpener`, and can be directly used in `Static::with_opener`, for example.
pub struct MemoryFs {
files: MemoryFileMap,
}

impl Default for MemoryFs {
fn default() -> Self {
let mut files = MemoryFileMap::new();

// Create a top-level directory entry.
files.insert(
PathBuf::new(),
FileWithMetadata {
handle: Bytes::new(),
size: 0,
modified: None,
is_dir: true,
},
);

Self { files }
}
}

impl MemoryFs {
/// Initialize a `MemoryFs` from a directory.
///
/// This loads all files and their contents into memory. Symlinks are followed.
pub async fn from_dir(path: impl AsRef<Path>) -> Result<Self, Error> {
let mut fs = Self::default();

// Pending directories to scan, as: `(real path, virtual path)`
let mut dirs = vec![(path.as_ref().to_path_buf(), PathBuf::new())];
while let Some((dir, base)) = dirs.pop() {
let mut iter = fs::read_dir(dir).await?;
while let Some(entry) = iter.next_entry().await? {
let metadata = entry.metadata().await?;

// Build the virtual path.
let mut out_path = base.to_path_buf();
out_path.push(entry.file_name());

if metadata.is_dir() {
// Add to pending stack,
dirs.push((entry.path(), out_path));
} else if metadata.is_file() {
// Read file contents and create an entry.
let data = fs::read(entry.path()).await?;
fs.add(out_path, data.into(), metadata.modified().ok());
}
}
}

Ok(fs)
}

/// Add a file to the `MemoryFs`.
///
/// This automatically creates directory entries leading up to the path. Any existing entries
/// are overwritten.
pub fn add(
&mut self,
path: impl Into<PathBuf>,
data: Bytes,
modified: Option<SystemTime>,
) -> &mut Self {
let path = path.into();

// Create directory entries.
let mut components: Vec<_> = path.components().collect();
components.pop();
let mut dir_path = PathBuf::new();
for component in components {
if let Component::Normal(x) = component {
dir_path.push(x);
self.files.insert(
dir_path.clone(),
FileWithMetadata {
handle: Bytes::new(),
size: 0,
modified: None,
is_dir: true,
},
);
}
}

// Create the file entry.
let size = data.len() as u64;
self.files.insert(
path,
FileWithMetadata {
handle: data,
size,
modified,
is_dir: false,
},
);

self
}
}

impl FileOpener for MemoryFs {
type File = Cursor<Bytes>;
type Future = Ready<Result<FileWithMetadata<Self::File>, Error>>;

fn open(&self, path: &Path) -> Self::Future {
ready(
self.files
.get(path)
.map(|file| FileWithMetadata {
handle: Cursor::new(file.handle.clone()),
size: file.size,
modified: file.modified,
is_dir: file.is_dir,
})
.ok_or_else(|| Error::new(ErrorKind::NotFound, "Not found")),
)
}
}
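
To illustrate the trait surface above, here is a hedged sketch of a custom FileOpener that wraps TokioFileOpener and refuses to serve dot-files. The wrapper type and its policy are purely illustrative and not part of the crate:

use std::{
    io::{Error, ErrorKind},
    path::Path,
};

use futures_util::future::{ready, Either, Ready};
use hyper_staticfile::vfs::{FileOpener, FileWithMetadata, TokioFileFuture, TokioFileOpener};
use tokio::fs::File;

// Illustrative wrapper: delegate to TokioFileOpener, but report NotFound for
// any path component that starts with a dot.
struct NoDotfilesOpener {
    inner: TokioFileOpener,
}

impl FileOpener for NoDotfilesOpener {
    type File = File;
    type Future = Either<TokioFileFuture, Ready<Result<FileWithMetadata<File>, Error>>>;

    fn open(&self, path: &Path) -> Self::Future {
        let hidden = path
            .components()
            .any(|c| c.as_os_str().to_string_lossy().starts_with('.'));
        if hidden {
            Either::Right(ready(Err(Error::new(ErrorKind::NotFound, "hidden file"))))
        } else {
            Either::Left(self.inner.open(path))
        }
    }
}

Such an opener could then be passed to Static::with_opener or Resolver::with_opener, as shown in the src/service.rs changes above.
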
21 changes: 8 additions & 13 deletions tests/hyper.rs
@@ -1,24 +1,19 @@
use futures_util::future;
use hyper::service::make_service_fn;
use hyper_staticfile::Static;
use std::path::Path;

use hyper_staticfile::Static;

use hyper_util::rt::TokioIo;

// This test currently only demonstrates that a `Static` instance can be used
// as a hyper service directly.
#[tokio::test]
async fn test_usable_as_hyper_service() {
let static_ = Static::new(Path::new("target/doc/"));

let make_service = make_service_fn(|_| {
let static_ = static_.clone();
future::ok::<_, hyper::Error>(static_)
});

// Bind to port "0" to allow the OS to pick one that's free, avoiding
// the risk of collisions.
let addr = ([127, 0, 0, 1], 0).into();
let server = hyper::server::Server::bind(&addr).serve(make_service);
let (stream, _) = tokio::io::duplex(2);
let fut =
hyper::server::conn::http1::Builder::new().serve_connection(TokioIo::new(stream), static_);

// It's enough to show that this builds, so no need to execute anything.
drop(server);
drop(fut);
}
167 changes: 135 additions & 32 deletions tests/static.rs
@@ -1,15 +1,23 @@
use futures_util::stream::StreamExt;
use std::{
fs,
future::Future,
io::{Cursor, Error as IoError, Read, Write},
process::Command,
str,
time::{Duration, SystemTime},
};

use http::{header, Request, StatusCode};
use http_body_util::BodyExt;
use httpdate::fmt_http_date;
use hyper_staticfile::Static;
use std::future::Future;
use std::io::{Cursor, Error as IoError, Write};
use std::process::Command;
use std::time::{Duration, SystemTime};
use std::{fs, str};
use tempdir::TempDir;

type Response = hyper::Response<hyper::Body>;
use hyper::body::Buf;
use hyper_staticfile::{
vfs::{FileAccess, MemoryFs},
AcceptEncoding, Body, Encoding, Static,
};
use tempfile::TempDir;

type Response = hyper::Response<Body>;
type ResponseResult = Result<Response, IoError>;

struct Harness {
@@ -18,19 +26,26 @@ struct Harness {
}
impl Harness {
fn new(files: Vec<(&str, &str)>) -> Harness {
let dir = TempDir::new("hyper-staticfile-tests").unwrap();
let dir = Self::create_temp_dir(files);

let mut static_ = Static::new(dir.path());
static_
.cache_headers(Some(3600))
.allowed_encodings(AcceptEncoding::all());

Harness { dir, static_ }
}

fn create_temp_dir(files: Vec<(&str, &str)>) -> TempDir {
let dir = TempDir::new().unwrap();
for (subpath, contents) in files {
let fullpath = dir.path().join(subpath);
fs::create_dir_all(fullpath.parent().unwrap())
.and_then(|_| fs::File::create(fullpath))
.and_then(|mut file| file.write_all(contents.as_bytes()))
.expect("failed to write fixtures");
}

let mut static_ = Static::new(dir.path().clone());
static_.cache_headers(Some(3600));

Harness { dir, static_ }
dir
}

fn append(&self, subpath: &str, content: &str) {
@@ -56,17 +71,17 @@ impl Harness {
}
}

async fn read_body(req: Response) -> String {
let mut buf = vec![];
let mut body = req.into_body();
loop {
match body.next().await {
None => break,
Some(Err(err)) => panic!("{:?}", err),
Some(Ok(chunk)) => buf.extend_from_slice(&chunk[..]),
}
}
String::from_utf8(buf).unwrap()
async fn read_body<F: FileAccess>(res: hyper::Response<Body<F>>) -> String {
let mut body = String::new();
res.into_body()
.collect()
.await
.unwrap()
.aggregate()
.reader()
.read_to_string(&mut body)
.unwrap();
body
}

#[tokio::test]
@@ -103,13 +118,32 @@ async fn returns_404_if_file_not_found() {

#[tokio::test]
async fn redirects_if_trailing_slash_is_missing() {
let harness = Harness::new(vec![("dir/index.html", "this is index")]);
let harness = Harness::new(vec![("foo/bar/index.html", "this is index")]);

let res = harness.get("/dir").await.unwrap();
let res = harness.get("/foo/bar").await.unwrap();
assert_eq!(res.status(), StatusCode::MOVED_PERMANENTLY);

let url = res.headers().get(header::LOCATION).unwrap();
assert_eq!(url, "/dir/");
assert_eq!(url, "/foo/bar/");
}

#[tokio::test]
async fn redirects_to_sanitized_path() {
let harness = Harness::new(vec![("foo.org/bar/index.html", "this is index")]);

// Previous versions would base the redirect on the request path, but that is user input, and
// the user could construct a schema-relative redirect this way.
let res = harness.get("//foo.org/bar").await.unwrap();
assert_eq!(res.status(), StatusCode::MOVED_PERMANENTLY);

let url = res.headers().get(header::LOCATION).unwrap();
// TODO: The request path is apparently parsed differently on Windows, but at least the
// resulting redirect is still safe, and that's the important part.
if cfg!(target_os = "windows") {
assert_eq!(url, "/");
} else {
assert_eq!(url, "/foo.org/bar/");
}
}

#[tokio::test]
@@ -230,7 +264,7 @@ async fn last_modified_is_gmt() {
let mut file_path = harness.dir.path().to_path_buf();
file_path.push("file1.html");
let status = Command::new("touch")
.args(&["-t", "198510260122.00"])
.args(["-t", "198510260122.00"])
.arg(file_path)
.env("TZ", "UTC")
.status()
@@ -254,7 +288,7 @@ async fn no_headers_for_invalid_mtime() {
let mut file_path = harness.dir.path().to_path_buf();
file_path.push("file1.html");
let status = Command::new("touch")
.args(&["-t", "197001010000.01"])
.args(["-t", "197001010000.01"])
.arg(file_path)
.env("TZ", "UTC")
.status()
@@ -407,6 +441,75 @@ async fn serves_requested_range_not_satisfiable_when_at_end() {
assert_eq!(res.status(), hyper::StatusCode::RANGE_NOT_SATISFIABLE);
}

#[tokio::test]
async fn serves_gzip() {
let harness = Harness::new(vec![
("file1.html", "this is file1"),
("file1.html.gz", "fake gzip compression"),
]);
let req = Request::builder()
.uri("/file1.html")
.header(header::ACCEPT_ENCODING, "gzip")
.body(())
.expect("unable to build request");

let res = harness.request(req).await.unwrap();
assert_eq!(
res.headers().get(header::CONTENT_ENCODING),
Some(&Encoding::Gzip.to_header_value())
);
assert_eq!(read_body(res).await, "fake gzip compression");
}

#[tokio::test]
async fn serves_br() {
let harness = Harness::new(vec![
("file1.html", "this is file1"),
("file1.html.br", "fake brotli compression"),
("file1.html.gz", "fake gzip compression"),
]);
let req = Request::builder()
.uri("/file1.html")
.header(header::ACCEPT_ENCODING, "br;q=1.0, gzip;q=0.5")
.body(())
.expect("unable to build request");

let res = harness.request(req).await.unwrap();
assert_eq!(
res.headers().get(header::CONTENT_ENCODING),
Some(&Encoding::Br.to_header_value())
);
assert_eq!(read_body(res).await, "fake brotli compression");
}

#[tokio::test]
async fn test_memory_fs() {
let dir = Harness::create_temp_dir(vec![
("index.html", "root index"),
("nested/index.html", "nested index"),
]);
let fs = MemoryFs::from_dir(dir.path())
.await
.expect("MemoryFs failed");
dir.close().expect("tempdir cleanup failed");

let static_ = Static::with_opener(fs);

let req = Request::builder()
.uri("/")
.body(())
.expect("unable to build request");
let res = static_.clone().serve(req).await.unwrap();
assert_eq!(read_body(res).await, "root index");

let req = Request::builder()
.uri("/nested/")
.body(())
.expect("unable to build request");
let res = static_.serve(req).await.unwrap();
assert_eq!(read_body(res).await, "nested index");
}

#[cfg(target_os = "windows")]
#[tokio::test]
async fn ignore_windows_drive_letter() {