feat: eza is substantially slower than ls in large directories #922

Open

apcamargo opened this issue Apr 7, 2024 · 11 comments

@apcamargo

As discussed here, eza is significantly slower than ls when listing files in large directories. This issue persists with the latest version (0.18.9).

Here's a benchmark for a directory with ~50k files:

Benchmark 1: ls -1 dir | wc -l
  Time (mean ± σ):     807.9 ms ± 219.8 ms    [User: 102.2 ms, System: 20.4 ms]
  Range (min … max):   597.6 ms … 1036.0 ms    3 runs

Benchmark 2: eza -1 dir | wc -l
  Time (mean ± σ):      3.541 s ±  0.422 s    [User: 0.203 s, System: 0.654 s]
  Range (min … max):    3.074 s …  3.895 s    3 runs

Summary
  ls -1 dir | wc -l ran
    4.38 ± 1.30 times faster than eza -1 dir | wc -l
@planet36
Contributor

In this comment on lsd-rs/lsd#378, I did some benchmarking that compared the performance of exa, lsd, ls, and uu-ls.

I'll update it to use eza instead of exa, and increase the number of files from 10,000 to 50,000. And I might remove lsd because it's too slow.

@planet36
Contributor

I created a directory in /tmp with 50,000 empty files. On my Linux system, /tmp is a tmpfs, and bash was the shell I used.

mkdir /tmp/5E4
cd /tmp/5E4
touch $(seq 5E4)

Then I compared the execution time of eza, lsd, ls, and uu-ls on that directory.

The options were intended to disable features that differ between programs (such as icons and colors) but still display single-line ("-1") or long ("-l") output.

Single line output ("-1")

printf '\n\neza'   ; time LC_ALL=C /usr/bin/eza   --color=never --sort=none -1 --icons=never > /dev/null
printf '\n\nlsd'   ; time LC_ALL=C /usr/bin/lsd   --color=never --sort=none -1 --icon=never --ignore-config > /dev/null
printf '\n\nls'    ; time LC_ALL=C /usr/bin/ls    --color=never --sort=none -1 > /dev/null
printf '\n\nuu-ls' ; time LC_ALL=C /usr/bin/uu-ls --color=never --sort=none -1 > /dev/null
eza
real    0m0.053s
user    0m0.022s
sys     0m0.031s


lsd
real    0m25.304s
user    0m1.751s
sys     0m3.063s


ls
real    0m0.008s
user    0m0.004s
sys     0m0.004s


uu-ls
real    0m0.020s
user    0m0.010s
sys     0m0.010s

Long output ("-l")

printf '\n\neza'   ; time LC_ALL=C /usr/bin/eza   --color=never --sort=none -l --icons=never > /dev/null
printf '\n\nlsd'   ; time LC_ALL=C /usr/bin/lsd   --color=never --sort=none -l --icon=never --ignore-config > /dev/null
printf '\n\nls'    ; time LC_ALL=C /usr/bin/ls    --color=never --sort=none -l > /dev/null
printf '\n\nuu-ls' ; time LC_ALL=C /usr/bin/uu-ls --color=never --sort=none -l > /dev/null
eza
real    0m0.118s
user    0m0.093s
sys     0m0.126s


lsd
real    0m25.326s
user    0m1.668s
sys     0m3.234s


ls
real    0m0.059s
user    0m0.020s
sys     0m0.039s


uu-ls
real    0m0.114s
user    0m0.057s
sys     0m0.053s

Next I logged the system calls made by each program.

LC_ALL=C strace /usr/bin/eza   --color=never --sort=none -l --icons=never > /dev/null 2> ~/strace.eza.txt
LC_ALL=C strace /usr/bin/lsd   --color=never --sort=none -l --icon=never --ignore-config > /dev/null 2> ~/strace.lsd.txt
LC_ALL=C strace /usr/bin/ls    --color=never --sort=none -l > /dev/null 2> ~/strace.ls.txt
LC_ALL=C strace /usr/bin/uu-ls --color=never --sort=none -l > /dev/null 2> ~/strace.uu-ls.txt
cd
wc --lines --total=never strace.*.txt
   100452 strace.eza.txt
 11556158 strace.lsd.txt
   100947 strace.ls.txt
   150734 strace.uu-ls.txt

You can see that lsd made many more system calls than the others. Its strace output was about 883 MB, so keep that in mind if you test on a directory with more files.

Here's a count of how many times each system call was made for each program.

printf '\n\neza\n'   ; sed -n -E -e 's/\(.*//gp' strace.eza.txt   | sort | uniq -c
printf '\n\nlsd\n'   ; sed -n -E -e 's/\(.*//gp' strace.lsd.txt   | sort | uniq -c
printf '\n\nls\n'    ; sed -n -E -e 's/\(.*//gp' strace.ls.txt    | sort | uniq -c
printf '\n\nuu-ls\n' ; sed -n -E -e 's/\(.*//gp' strace.uu-ls.txt | sort | uniq -c
eza
      1 access
      1 arch_prctl
    121 brk
      8 clone3
     22 close
      1 execve
      1 exit_group
     16 fstat
      4 futex
     50 getdents64
      2 getrandom
      4 ioctl
      6 lseek
     60 mmap
     24 mprotect
     10 mremap
      5 munmap
     24 openat
      1 poll
      2 pread64
      2 prlimit64
     46 read
      1 rseq
      7 rt_sigaction
     17 rt_sigprocmask
      2 sched_getaffinity
      1 set_robust_list
      1 set_tid_address
      3 sigaltstack
  50008 statx
  50000 write


lsd
      1 access
      1 arch_prctl
    308 brk
2850070 close
 100002 connect
 100002 epoll_create1
 600012 epoll_ctl
 400008 epoll_pwait2
  27559 epoll_wait
      1 execve
      1 exit_group
1550043 fstat
      3 futex
 200052 getdents64
      1 getpid
      4 getrandom
      5 ioctl
 150003 lgetxattr
 150004 lseek
     34 mmap
      9 mprotect
     18 mremap
      7 munmap
 150004 newfstatat
3300079 openat
      1 poll
      6 prctl
      2 pread64
      2 prlimit64
 150019 read
  50001 readlink
 127561 recvfrom
      1 rseq
      5 rt_sigaction
 200004 rt_sigprocmask
      1 sched_getaffinity
 100002 sendto
      1 set_robust_list
      1 set_tid_address
      3 sigaltstack
 100002 socket
1050022 statx
 100002 timerfd_create
 100289 timerfd_settime
      1 write


ls
      1 access
      1 arch_prctl
     15 brk
     70 close
      2 connect
      2 epoll_create1
     12 epoll_ctl
      8 epoll_pwait2
      2 epoll_wait
      1 execve
      1 exit_group
     44 fstat
      3 futex
     52 getdents64
      1 getpid
      3 getrandom
      2 ioctl
  50000 listxattr
      4 lseek
     28 mmap
      7 mprotect
      6 mremap
      3 munmap
      4 newfstatat
     77 openat
      6 prctl
      2 pread64
      1 prlimit64
     14 read
      4 recvfrom
      1 rseq
      4 rt_sigprocmask
      2 sendto
      1 set_robust_list
      1 set_tid_address
      2 socket
  50020 statx
      2 timerfd_create
      2 timerfd_settime
    535 write


uu-ls
      1 access
      1 arch_prctl
     40 brk
     70 close
      2 connect
      2 epoll_create1
     12 epoll_ctl
      8 epoll_pwait2
      1 execve
      1 exit_group
     43 fstat
      3 futex
     52 getdents64
      1 getpid
      4 getrandom
      6 ioctl
 100000 llistxattr
      4 lseek
     32 mmap
      9 mprotect
      6 mremap
      5 munmap
      5 newfstatat
     79 openat
      1 poll
      6 prctl
      2 pread64
      2 prlimit64
     19 read
      2 recvfrom
      1 rseq
      5 rt_sigaction
      4 rt_sigprocmask
      1 sched_getaffinity
      2 sendto
      1 set_robust_list
      1 set_tid_address
      3 sigaltstack
      2 socket
  50022 statx
      2 timerfd_create
      2 timerfd_settime
    268 write

I used a spreadsheet to put the strace results in a table for easier comparison.

System call             eza     lsd      ls   uu-ls
access                    1       1       1       1
arch_prctl                1       1       1       1
brk                     121     308      15      40
clone3                    8
close                    22 2850070      70      70
connect                     100002       2       2
epoll_create1               100002       2       2
epoll_ctl                   600012      12      12
epoll_pwait2                400008       8       8
epoll_wait                   27559       2
execve                    1       1       1       1
exit_group                1       1       1       1
fstat                    16 1550043      44      43
futex                     4       3       3       3
getdents64               50  200052      52      52
getpid                            1       1       1
getrandom                 2       4       3       4
ioctl                     4       5       2       6
lgetxattr                   150003
listxattr                                50000
llistxattr                                       100000
lseek                     6  150004       4       4
mmap                     60      34      28      32
mprotect                 24       9       7       9
mremap                   10      18       6       6
munmap                    5       7       3       5
newfstatat                  150004       4       5
openat                   24 3300079      77      79
poll                      1       1               1
prctl                             6       6       6
pread64                   2       2       2       2
prlimit64                 2       2       1       2
read                     46  150019      14      19
readlink                     50001
recvfrom                    127561       4       2
rseq                      1       1       1       1
rt_sigaction              7       5               5
rt_sigprocmask           17  200004       4       4
sched_getaffinity         2       1               1
sendto                      100002       2       2
set_robust_list           1       1       1       1
set_tid_address           1       1       1       1
sigaltstack               3       3               3
socket                      100002       2       2
statx                 50008 1050022   50020   50022
timerfd_create              100002       2       2
timerfd_settime             100289       2       2
write                 50000       1     535     268

The only one that stands out to me for eza is the number of calls to write.

@daviessm
Contributor

The only one that stands out to me for eza is the number of calls to write.

Yeah, I'm curious about that as well. Writing to stdout perhaps (i.e. no buffering)? We'd be really grateful if you have time to figure out where that's coming from (especially if it significantly affects the runtime). I think @tertsdiepraam was looking into this as well a while ago.

@tertsdiepraam
Contributor

tertsdiepraam commented Apr 11, 2024

This is what I was looking into: #833

What stood out to me was the statx calls, because Linux already gives a bunch of information about the files in each directory. I'll have to look into this again.
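
For context, a minimal sketch (not eza's actual code) of using that information from std: on Linux, DirEntry::file_type() is usually answered from the d_type field that getdents64 already returned, so a plain listing can classify entries without a statx per file.

use std::fs;
use std::io;

// Sketch: list a directory using only what getdents64 provides.
fn list_dir(path: &str) -> io::Result<()> {
    for entry in fs::read_dir(path)? {
        let entry = entry?;
        // Usually filled from d_type with no extra syscall; std only
        // falls back to lstat when the filesystem reports DT_UNKNOWN.
        let file_type = entry.file_type()?;
        println!("{}\tdir={}", entry.file_name().to_string_lossy(), file_type.is_dir());
    }
    Ok(())
}

fn main() -> io::Result<()> {
    list_dir("/tmp/5E4")
}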

Stdout is already line-buffered, so I don't think writing will be super expensive, but it could still be optimized, of course.
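
For reference, a minimal sketch of the buffering idea (illustrative, not eza's code): wrapping stdout in a BufWriter turns one write(2) per line into one per buffer, which is roughly what separates eza's 50000 writes from ls's 535.

use std::io::{self, BufWriter, Write};

fn main() -> io::Result<()> {
    // Rust's Stdout is line-buffered, so each writeln! costs one
    // write(2). A BufWriter in front batches them (~8 KiB per call).
    let stdout = io::stdout();
    let mut out = BufWriter::new(stdout.lock());
    for i in 1..=50_000 {
        writeln!(out, "{i}")?;
    }
    // Flush whatever is still buffered before exiting.
    out.flush()
}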

@planet36
Contributor

The system call results are from using the long output (-l) option.
I'll run it again tomorrow with the single line (-1) option.

@tertsdiepraam
Contributor

Ah that explains my confusion, because uutils should do way fewer statx calls with -1. My PR should still be a step in the right direction.

@planet36
Contributor

Here are the results from running with the single line (-1) option.

LC_ALL=C strace -o ~/strace.eza.txt   /usr/bin/eza   --classify=never --color=never --sort=none -1 --icons=never > /dev/null
LC_ALL=C strace -o ~/strace.ls.txt    /usr/bin/ls    --classify=never --color=never --sort=none -1 > /dev/null
LC_ALL=C strace -o ~/strace.uu-ls.txt /usr/bin/uu-ls --classify=never --color=never --sort=none -1 > /dev/null
cd
wc --lines --total=never strace.*.txt
  100252 strace.eza.txt
     176 strace.ls.txt
     242 strace.uu-ls.txt

Here's a count of how many times each system call was made for each program.

printf '\n\neza\n'   ; sed -n -E -e 's/\(.*//gp' strace.eza.txt   | sort | uniq -c
printf '\n\nls\n'    ; sed -n -E -e 's/\(.*//gp' strace.ls.txt    | sort | uniq -c
printf '\n\nuu-ls\n' ; sed -n -E -e 's/\(.*//gp' strace.uu-ls.txt | sort | uniq -c
eza
      1 access
      1 arch_prctl
     39 brk
     13 close
      1 execve
      1 exit_group
     13 fstat
     48 getdents64
      1 getrandom
      4 ioctl
     52 mmap
     13 mprotect
     10 mremap
      5 munmap
     13 openat
      1 poll
      2 pread64
      2 prlimit64
     17 read
      1 rseq
      6 rt_sigaction
      1 sched_getaffinity
      1 set_robust_list
      1 set_tid_address
      3 sigaltstack
  50001 statx
  50000 write


ls
      1 access
      1 arch_prctl
      3 brk
      6 close
      1 execve
      1 exit_group
      5 fstat
     48 getdents64
      1 getrandom
      2 ioctl
     12 mmap
      4 mprotect
      1 munmap
      4 openat
      6 prctl
      2 pread64
      1 prlimit64
      2 read
      1 rseq
      1 set_robust_list
      1 set_tid_address
     71 write


uu-ls
      1 access
      1 arch_prctl
     50 brk
      7 close
      1 execve
      1 exit_group
      7 fstat
     48 getdents64
      2 getrandom
      6 ioctl
     23 mmap
      7 mprotect
     10 mremap
      4 munmap
      1 newfstatat
      7 openat
      1 poll
      2 pread64
      2 prlimit64
      8 read
      1 rseq
      5 rt_sigaction
      1 sched_getaffinity
      1 set_robust_list
      1 set_tid_address
      3 sigaltstack
      2 statx
     38 write

System call             eza      ls   uu-ls
access                    1       1       1
arch_prctl                1       1       1
brk                      39       3      50
close                    13       6       7
execve                    1       1       1
exit_group                1       1       1
fstat                    13       5       7
getdents64               48      48      48
getrandom                 1       1       2
ioctl                     4       2       6
mmap                     52      12      23
mprotect                 13       4       7
mremap                   10              10
munmap                    5       1       4
newfstatat                                1
openat                   13       4       7
poll                      1               1
prctl                             6
pread64                   2       2       2
prlimit64                 2       1       2
read                     17       2       8
rseq                      1       1       1
rt_sigaction              6               5
sched_getaffinity         1               1
set_robust_list           1       1       1
set_tid_address           1       1       1
sigaltstack               3               3
statx                 50001               2
write                 50000      71      38

Notice the statx and write calls.

@adamnemecek

You should consider adding an option that limits the number of lines printed, maybe --limit n.

@tertsdiepraam
Contributor

Not a bad idea, even as just a nice feature to replace eza -l | head, which removes all styling. However, in terms of performance it's just a workaround, because the difference between eza and the other utils shows that eza could do better. The write calls should be easy to improve. The statx calls are necessary to determine whether a file is executable, so that's more of a design decision for the maintainers. They can still be reduced easily when color is turned off, though. If there's demand for that, I can pick up my PR again.
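
For concreteness, the check that forces the metadata lookup boils down to something like this (a sketch assuming Unix, not the actual eza code). The execute bits live in st_mode, which getdents64 does not provide, hence one statx per file:

use std::fs::Metadata;
use std::os::unix::fs::PermissionsExt;

// Sketch: a regular file is "executable" if any execute bit is set.
fn is_executable(md: &Metadata) -> bool {
    md.is_file() && md.permissions().mode() & 0o111 != 0
}

fn main() -> std::io::Result<()> {
    let md = std::fs::metadata("/usr/bin/eza")?;
    println!("executable: {}", is_executable(&md));
    Ok(())
}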

@planet36
Contributor

.... The statx calls are necessary to determine whether a file is executable, so that's more of a design decision for the maintainers. They can still be reduced easily when color is turned off, though. If there's demand for that, I can pick up my PR again.

Also, they might not be necessary when classify and icons are turned off.

@tertsdiepraam
Contributor

Indeed, that's what my PR does. It puts the metadata behind a OnceLock, so that it's retrieved lazily and at most once. Then it's just a matter of making sure that no part of the code makes another metadata call somewhere.
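
Roughly, the shape of that change (sketched from the description above, not the PR's actual code):

use std::fs::Metadata;
use std::io;
use std::path::PathBuf;
use std::sync::OnceLock;

// Sketch: metadata is fetched on first use and cached, so files that
// never need it (e.g. plain -1 output without colors) are never stat'd.
struct File {
    path: PathBuf,
    metadata: OnceLock<io::Result<Metadata>>,
}

impl File {
    fn new(path: PathBuf) -> Self {
        File { path, metadata: OnceLock::new() }
    }

    // At most one symlink_metadata (statx) call per file, ever.
    fn metadata(&self) -> Option<&Metadata> {
        self.metadata
            .get_or_init(|| std::fs::symlink_metadata(&self.path))
            .as_ref()
            .ok()
    }
}

fn main() {
    let f = File::new(PathBuf::from("/tmp/5E4/1"));
    // The first call triggers the single statx; later calls are free.
    if let Some(md) = f.metadata() {
        println!("len = {}", md.len());
    }
}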
