
Tesla Pipelining Standardization #525

Open
yordis opened this issue Mar 26, 2022 · 3 comments
Comments

@yordis (Member) commented Mar 26, 2022

Upon request from our conversation in Slack @teamon

Context

As of today, there are multiple packages that share a similar architecture: a pipelining layer plus routing of messages, for example:

  • Tesla
  • Plug
  • Commanded
  • Goth

Most have a different way of registering the routing and the pipelining.

Problems

Worth saying we can't do much about Commanded or any other package, but we can start with Tesla and figure those things out as we go. Please focus on what Tesla can do; I mentioned the others to make a point.

  • They all have different APIs, so it may be an opportunity to be closer to Plug (since it is the most popular) across the packages.

  • Is there any opportunity in the Elixir ecosystem to solve this problem with another layer of indirection? What is so different between them? Why are they so different? Could they all share the same API and behave quite similarly?
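To make the idea concrete, here is a minimal sketch of what a shared, Plug-style contract could look like. The module names `Pipeline`, `Pipeline.Step`, and `PutHeader` are hypothetical, invented for illustration; only the `call/2` convention is borrowed from Plug:

```elixir
# Hypothetical shared contract modeled on Plug's call/2 convention.
# The env can be any per-library structure (Tesla.Env, Plug.Conn, a map, ...).
defmodule Pipeline.Step do
  @callback call(env :: term(), opts :: term()) :: term()
end

defmodule Pipeline do
  # Thread an env through a list of {module, opts} steps, Plug-style.
  def run(env, steps) do
    Enum.reduce(steps, env, fn {mod, opts}, acc -> mod.call(acc, opts) end)
  end
end

defmodule PutHeader do
  @behaviour Pipeline.Step

  # Example step: prepend a header tuple to the env's header list.
  @impl true
  def call(env, {key, value}) do
    %{env | headers: [{key, value} | env.headers]}
  end
end
```

Under this sketch, each library would keep its own env struct but register steps the same way, e.g. `Pipeline.run(%{headers: []}, [{PutHeader, {"accept", "application/json"}}])`.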

Expected Outcome

  • Alignment in the ecosystem.
  • Easier for newbies to get involved in the ecosystem.
  • Shared indirection that helps future package development, so that improvements at one level of indirection benefit the next layer.
@yordis (Member Author) commented Mar 26, 2022

@slashdotdash I hope you don't mind the ping, since you developed something similar for Commanded. The intention is to find the right level of indirection and seek alignment if possible.

@teamon (Member) commented May 8, 2022

Just FYI - I did a PoC some time ago of how Tesla could look if it were based on Plug.Conn (reusing as much of Plug as possible), and the results were not very good - Plug is focused on the server side (duh) and Tesla on the client side, and they do not really fit well together.

Totally untested 🐉-inside random piece of code below:

# lib/tesla.ex
defmodule Tesla do
  def new(opts) do
    %Plug.Conn{adapter: opts[:adapter], path_info: [], request_path: "", owner: self()}
  end

  def get(conn, path) do
    request(conn, "GET", path)
  end

  def request(conn, method, path) do
    %{conn | method: method}
    |> append_path(path)
    |> execute()
  end

  def execute(conn) do
    {mod, opts} = conn.adapter

    with {:ok, conn} <- mod.call(conn, opts) do
      run_after_receive(conn)
    end
  end

  def url(conn) do
    %URI{
      path: conn.request_path,
      host: conn.host,
      port: conn.port,
      query: conn.query_string,
      scheme: to_string(conn.scheme)
    }
    |> URI.to_string()
  end

  def body(conn) do
    # Default to an empty binary, not a charlist
    conn.private[:req_body] || ""
  end

  def register_after_receive(conn, callback) when is_function(callback, 1) do
    update_in(conn.private[:after_receive], &[callback | &1 || []])
  end

  defp run_after_receive(%{private: private} = conn) do
    Enum.reduce(private[:after_receive] || [], conn, & &1.(&2))
  end

  def append_path(conn, uri) do
    %URI{path: path, host: host, port: port, query: qs, scheme: scheme} = URI.new!(uri)

    path_info = conn.path_info ++ split_path(path)
    request_path = Enum.join([nil | path_info], "/")

    %{
      conn
      | host: host || conn.host,
        path_info: path_info,
        port: port || conn.port,
        query_string: qs || conn.query_string || "",
        request_path: request_path,
        scheme: if(scheme, do: String.to_atom(scheme), else: conn.scheme)
    }
  end

  defp split_path(path) do
    segments = :binary.split(path, "/", [:global])
    for segment <- segments, segment != "", do: segment
  end
end

defmodule Tesla.Middleware.BaseUrl do
  def call(conn, base) do
    Tesla.append_path(conn, base)
  end
end

defmodule Tesla.Middleware.BearerAuth do
  def call(conn, token) do
    # Bearer tokens go in the "authorization" header
    conn
    |> Plug.Conn.put_req_header("authorization", "Bearer #{token}")
  end
end

defmodule Tesla.Middleware.JSON do
  @opts Plug.Parsers.init(parsers: [:json], json_decoder: Jason)

  def call(conn, _opts) do
    Tesla.register_after_receive(conn, fn conn ->
      conn =
        conn
        |> swap_headers()
        |> swap_method("POST")

      IO.inspect(List.keyfind(conn.req_headers, "content-type", 0))

      conn =
        conn
        |> Plug.Parsers.call(@opts)

      conn
      |> swap_headers()
      |> swap_method(conn.method)
    end)
  end

  defp swap_headers(conn) do
    %{conn | req_headers: conn.resp_headers, resp_headers: conn.req_headers}
  end

  defp swap_method(conn, method) do
    %{conn | method: method}
  end
end

defmodule Tesla.Adapter.Hackney do
  import Tesla, only: [url: 1, body: 1]

  def call(conn, opts) do
    case request(conn.method, url(conn), conn.req_headers, body(conn), opts) do
      {:ok, status, headers, body} ->
        {:ok,
         %{
           conn
           | status: status,
             resp_headers: format_headers(headers),
             adapter: {__MODULE__, body}
         }}
    end
  end

  def read_req_body(ref, _opts) do
    with {:ok, body} <- :hackney.body(ref) do
      {:ok, body, ref}
    end
  end

  defp request(method, url, headers, body, opts) do
    handle(:hackney.request(method, url, headers, body, opts))
  end

  defp handle({:ok, status, headers, ref}) when is_reference(ref) do
    {:ok, status, headers, ref}
  end

  defp format_headers(headers) do
    for {key, value} <- headers do
      {String.downcase(to_string(key)), to_string(value)}
    end
  end

  defp format_body(data) when is_list(data), do: IO.iodata_to_binary(data)
  defp format_body(data) when is_binary(data) or is_reference(data), do: data
end


# test/tesla_test.exs
defmodule TeslaTest do
  use ExUnit.Case

  # @url "http://localhost:#{Application.get_env(:httparrot, :http_port)}"

  describe "DSL" do
    defmodule ClientA do
      def new(token) do
        Tesla.new(adapter: {Tesla.Adapter.Hackney, []})
        |> Tesla.Middleware.BaseUrl.call("https://httpbin.org")
        |> Tesla.Middleware.BearerAuth.call(token)
        |> Tesla.Middleware.JSON.call([])
      end

      def json(client) do
        Tesla.get(client, "/json")
      end
    end

    test "it works" do
      client = ClientA.new("mytoken")

      ClientA.json(client)
      |> IO.inspect()
    end
  end
end

@yordis (Member Author) commented May 8, 2022

I was more focused on making the underlying mechanism the same; higher up, they would still end up with different implementations to some extent.

For example, these would be where they all diverge:

  • Tesla.Env
  • Plug.Conn
  • Commanded.Middleware.Pipeline
  • Goth.HTTPClient (just a map)

The message/data structure passed between the middleware differs, but most need the exact same macros to compose the pipelining, halting, and so on.

So, would it be prudent to make step 1 have something quite similar in terms of API design and follow up on how to continue sharing more code?
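The shared mechanics that step 1 points at could be sketched like this (assuming a Plug.Conn-style `:halted` flag; the `GenericPipeline` name is made up for illustration):

```elixir
# The data structure differs per library, but composing steps and halting
# works the same. `:halted` mirrors Plug.Conn's halted field.
defmodule GenericPipeline do
  # Run the env through each step, stopping as soon as one halts it.
  def run(env, steps) do
    Enum.reduce_while(steps, env, fn step, acc ->
      case step.(acc) do
        %{halted: true} = halted -> {:halt, halted}
        next -> {:cont, next}
      end
    end)
  end
end

# Usage: the third step never runs because the second one halts.
GenericPipeline.run(%{halted: false, log: []}, [
  fn env -> %{env | log: [:first | env.log]} end,
  fn env -> %{env | halted: true} end,
  fn env -> %{env | log: [:never | env.log]} end
])
```

Each library could keep its own env (Tesla.Env, Plug.Conn, Commanded's Pipeline, Goth's map) while sharing this composition and halting logic.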
