
In Console output, when finished -> write elapsed time #19

Open
wants to merge 2 commits into develop
Conversation

timstarbuck

Nice project! I added a counter maintained with Interlocked.Increment/Decrement and a loop that waits for completion. Then I added an IOutput method to write a string (only implemented in Console output).

Seems like you could also raise an event so the driving program knows it's finished (rough sketch below).

Let me know what you think!
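
Roughly, the counter-plus-event idea could look like the following (a simplified sketch with illustrative names, not the exact code in this PR):

using System;
using System.Threading;

public class CrawlCompletionTracker
{
    private int _pendingRequests;

    // Raised once the pending-request count drops back to zero.
    public event EventHandler CrawlFinished;

    // Call just before a url is sent off to be crawled.
    public void RequestStarted() => Interlocked.Increment(ref _pendingRequests);

    // Call once the response for a url has been processed.
    public void RequestFinished()
    {
        // If this was the last outstanding request, tell the driving program.
        if (Interlocked.Decrement(ref _pendingRequests) == 0)
            CrawlFinished?.Invoke(this, EventArgs.Empty);
    }
}

// Driving program: subscribe instead of polling, then print the elapsed time.
// var tracker = new CrawlCompletionTracker();
// var stopwatch = System.Diagnostics.Stopwatch.StartNew();
// tracker.CrawlFinished += (s, e) => Console.WriteLine("Elapsed: " + stopwatch.Elapsed);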

linkcrawlerelapsed

@loneshark99

loneshark99 commented Jul 21, 2016

@timstarbuck

How about this way of doing it? Do you see any differences?

f4f3b35

You can pull from here.

https://github.com/loneshark99/LinkCrawler

@timstarbuck
Author

I guess it depends on what we are actually trying to time :)

Showing the elapsed time of each request seems beneficial, but I read it differently: I took "when finished" to mean the point at which the site crawl has finished crawling all the links.

@loneshark99

@timstarbuck true

@hmol
Owner

hmol commented Jul 29, 2016

Hey, and thanks for contributing to my repository here 😄
I have tested your code and it works.
It may be because I'm not used to this type of code, but I get the feeling that this code is a bit "hacky":

while (counter > 0)
{
    // poll until every outstanding request has been counted back down
    Thread.Sleep(100);
}

The problem is that right now I don't really know how to do it another way, but I had something like this in mind: http://stackoverflow.com/a/25010220. I don't think we can use that here, because then we would need to have all the urls to crawl up front.
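
Roughly, that kind of approach (awaiting a fixed set of tasks) would look something like the sketch below, which is exactly why every url has to be known up front (illustrative names, not code from this repository):

using System;
using System.Linq;
using System.Threading.Tasks;

public static class KnownUrlCrawler
{
    // Illustrative only: Task.WhenAll needs the complete set of tasks up front,
    // so every url has to be known before the crawl starts.
    public static Task CrawlAllAsync(string[] urls, Func<string, Task> crawlUrlAsync)
    {
        return Task.WhenAll(urls.Select(crawlUrlAsync));
    }
}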
Without having researched it, I have also thought about solving this with some sort of queue that you add urls to, while at the same time the program pulls urls from the queue and crawls them. What do you think?
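
A very rough sketch of that queue idea, with illustrative names rather than code from this repository, could be:

using System;
using System.Collections.Concurrent;
using System.Threading;

public class CrawlQueue
{
    private readonly BlockingCollection<string> _urls = new BlockingCollection<string>();
    private int _pending;

    // Producer side: every newly discovered link goes into the queue.
    public void Enqueue(string url)
    {
        Interlocked.Increment(ref _pending);
        _urls.Add(url);
    }

    // Consumer side: pull urls and crawl until the queue drains completely.
    public void Run(Action<string> crawl)
    {
        foreach (var url in _urls.GetConsumingEnumerable())
        {
            crawl(url);   // crawling may call Enqueue() with links it discovers

            // When the last pending url has been processed, stop consuming.
            if (Interlocked.Decrement(ref _pending) == 0)
                _urls.CompleteAdding();
        }
    }
}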

@timstarbuck
Author

Ha, yes. I felt it was a bit "hacky" as well, but as you mentioned, it works ;). It's a bit of a chicken-and-egg problem: you don't know how many links there are to crawl until they've all been crawled. I'll ponder your notes and see if I can think of another way.
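
One non-polling variant to ponder could be a CountdownEvent, though it runs into the same chicken-and-egg ordering issue (purely a sketch, with illustrative names):

using System.Threading;

public class PendingCrawlTracker
{
    // Count starts at 1 for the root url; each discovered link adds one,
    // each crawled link signals one.
    private readonly CountdownEvent _pending = new CountdownEvent(1);

    // Note: AddCount throws once the count has already reached zero, so each
    // link must be counted before the crawl that discovered it signals.
    public void LinkDiscovered() => _pending.AddCount();

    public void LinkCrawled() => _pending.Signal();

    // The driving program blocks here, with no Thread.Sleep polling loop.
    public void WaitForCrawlToFinish() => _pending.Wait();
}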
