Practical-homeworks: followup from forum discussion #9

Open
MalloZup opened this issue Oct 8, 2018 · 5 comments
Labels: enhancement (New feature or request)


MalloZup commented Oct 8, 2018

Hi, I'm reopening this issue following our discussion in the forum.

The main goal would be to have practical/pragmatic homework that lets people get to know existing libraries in Elixir.

For example, these are the first ones I have in mind (the exercises could even be modular, e.g. 1.0, 1B):

1) Write a program in Elixir that fetches GitHub pull requests and prints each PR's title and author (exercise 1.0; see the sketch after this list).
1B) Expand this program to save the data into a Redis key/value store or another DB.

2) Write a program using a web-driver framework that tests the elixirforum webpage for specific content.

3) Write an IRC bot, etc.
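
For illustration, here is a minimal sketch of what exercise 1.0 might look like. It assumes HTTPoison and Jason as the HTTP/JSON dependencies (placeholder choices; any equivalent libraries would do), and the module name PrFetcher is made up:

    defmodule PrFetcher do
      @github_api "https://api.github.com/repos"

      # Fetch the open pull requests for owner/repo and print "title by author".
      def print_pull_requests(owner, repo) do
        # The GitHub API rejects requests without a User-Agent header.
        headers = [{"User-Agent", "pr-fetcher"}]

        {:ok, %HTTPoison.Response{status_code: 200, body: body}} =
          HTTPoison.get("#{@github_api}/#{owner}/#{repo}/pulls", headers)

        body
        |> Jason.decode!()
        |> Enum.each(fn pr -> IO.puts("#{pr["title"]} by #{pr["user"]["login"]}") end)
      end
    end

    # Usage: PrFetcher.print_pull_requests("elixir-lang", "elixir")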

MalloZup commented Oct 8, 2018

IMHO the best solution might be for us to review these exercises.

So learners would open a PR and the maintainers would review it; later on, the learner could review others' submissions too.

What do you think? TIA


notactuallypagemcconnell commented Oct 8, 2018

@MalloZup Re: your first point, I have had similar thoughts. I think it would be better to keep the exercises mostly Elixir-specific and not rely on third-party services. We have ETS, so I would steer away from Redis or another DB when the idea is to enhance Elixir skills.
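
For example (a minimal sketch; the :pull_requests table name and the sample data are made up), the Redis variant of exercise 1B could be done with ETS alone:

    # Create a named, public table and use it as a key/value store.
    :ets.new(:pull_requests, [:set, :public, :named_table])
    :ets.insert(:pull_requests, {"pr-123", %{title: "Fix typo", author: "octocat"}})

    # Look the entry up by key.
    [{_key, pr}] = :ets.lookup(:pull_requests, "pr-123")
    IO.puts(pr.title)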

Re: your second point, I'm not sure what you mean. Could you elaborate?

Here is an example exercise I had considered proposing. It follows the setup we already have for the homework (fizzbuzz etc.): you start with a base but have to expand it and (hopefully) keep writing tests.

Question:

Design a GenServer that on initialization takes a given mod (module), func (function), args (list of arguments), and interval (an interval in milliseconds), and runs the given function on the provided module with the provided arguments at the given interval.
Hint: check the docs for GenServer with h GenServer and peek at the examples.

Start:

    defmodule MyApp.Periodically do
      use GenServer

      def start_link do
        GenServer.start_link(__MODULE__, %{})
      end

      @impl true
      def init(state) do
        schedule_work() # Schedule work to be performed on start
        {:ok, state}
      end

      @impl true
      def handle_info(:work, state) do
        # Do the desired work here
        schedule_work() # Reschedule once more
        {:noreply, state}
      end

      defp schedule_work() do
        Process.send_after(self(), :work, 2 * 60 * 60 * 1000) # In 2 hours
      end
    end

And maybe we could even have a 'solution' branch which would have something like:

    defmodule PeriodicExample do
      use GenServer

      @moduledoc """
      Run a given module's function with the given arguments at the given interval.
      The module is passed as the module itself, the function as an atom, the args
      as a list, and the interval as an integer (in milliseconds), e.g.:

          PeriodicExample.start_link({IO, :puts, ["foo"], 1000})
      """

      # Guard the arguments up front so a malformed MFA or interval fails fast.
      def start_link({mod, func, args, interval} = state)
          when is_atom(mod) and is_atom(func) and is_list(args) and is_integer(interval) do
        GenServer.start_link(__MODULE__, state)
      end

      @impl true
      def init(state) do
        # Schedule the first run on start.
        schedule_work(state)
        {:ok, state}
      end

      @impl true
      def handle_info(:work, {mod, func, args, _interval} = state) do
        # Run the given function, then reschedule the next run.
        apply(mod, func, args)
        schedule_work(state)
        {:noreply, state}
      end

      defp schedule_work({_mod, _func, _args, interval}) do
        Process.send_after(self(), :work, interval)
      end
    end

This would preferably include a test suite too, but I didn't include one here for simplicity's sake; the point is just to express the idea.
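
For what it's worth, a minimal ExUnit sketch of such a suite could point the periodic job at the test process itself (the test name and the 10 ms interval are arbitrary):

    defmodule PeriodicExampleTest do
      use ExUnit.Case

      test "runs the given MFA repeatedly at the given interval" do
        # Send a message to the test process on every run.
        parent = self()
        {:ok, _pid} = PeriodicExample.start_link({Kernel, :send, [parent, :tick], 10})

        # At a 10 ms interval, at least two ticks should arrive well within 100 ms each.
        assert_receive :tick, 100
        assert_receive :tick, 100
      end
    end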


PS: could you link to the forum discussion? I don't frequent the forum, so I missed it, but I would love to catch up. @MalloZup


MalloZup commented Oct 9, 2018

@notactuallypagemcconnell hi! Thanks for the answer!

The forum link was here:
https://elixirforum.com/t/elixir-practixism-proposal/17115/3

@notactuallypagemcconnell I actually like the example you posted! 👍

As for the three examples I posted, I threw them together in a matter of seconds... they were more POCs than finished proposals.

Yep, I kind of agree that we should exclude third-party libraries as much as possible, in the spirit of free learning.

If you also look at the Elixir forum post, I was talking about pragmatism also in the sense of giving people feedback.

What I also like in the open-source world is feedback from people, so I was thinking of a more challenge-like way of doing the exercises.

Years ago I took the Linux kernel challenge, where I learned a lot about the low-level side of the Linux kernel and had fun.

That challenge also had a feedback part: patches sent via email and reviewed by maintainers.

So, summarizing, I was discussing basically two things:

  • more pragmatic/concept exercises (DB, networking, low-level), keeping the algorithm as a supporting actor in the challenge, not the main one (in the real world it is not always the algorithm but the design, the perspective, etc. that makes the difference)
  • feedback/review on the exercises (interacting with maintainers if possible)

@pdgonzalez872

@MalloZup I have a question about scaling this type of effort. I also think there is new data that may help guide this discussion.

This new data is the fact that Exercism changed the flow around how a student progresses through the exercises. They (last time I checked) started to require mentors to review and approve exercise submissions. I recently talked to someone at a university who said they no longer send students over to Exercism due to how slow the turnaround on grading the exercises is. This touches on an important point about reviewing the submissions: how can that be scaled? Maintainer reviews sound excellent in theory (and would be the preferred way), but I'm afraid they don't work that well in practice.

Unless there is some type of incentive we can provide the reviewers (unsure what at this time; maybe monetary? maybe publicity?), I think it will be difficult to keep up with the work. Getting the incentives right is also extremely hard (see the Cobra effect).

One idea I had the other day was to leverage SourceLevel for this type of labor-intensive review problem. I think they have a free tier, or something for open-source repos, that would be cool to use. I am not sure, but the founder is awesome and it wouldn't hurt to ask.

(I am/was a mentor at Exercism for Elixir... I did it for a month or so back in the day and haven't been back. The never-ending nature of the work makes you not want to take on the task after a while.)
