Benchmarking the glob() “dinosaur” and readdir() PHP functions

Well… it’s true. The PHP glob() function is a big dinosaur: a memory eater and a real speed killer (if that can be said). So let’s test the glob() function against some other alternatives.

I recently had to detect duplicated and missing files by comparing two directories holding a set of 80,000 files, and I used the PHP glob() function. 80k is a reasonable number for a local machine, but later I had to deal with 800k, and that is really a lot if you don’t do things the right way. So I had to try some other alternatives.

So I did some benchmarking, using a folder of 25,000 images and another folder with 10,000 random duplicates. I think this is a reasonable number to show any differences in the final result.

Here are the approaches I tried for testing the duplicate file detection.

So for the first test I ran the function as it is. You can see the entire function in my previous article:

[Screenshot: glob() benchmark output]

Processing took 6.76 seconds and 5 MB of memory.

Second, I used the GLOB_NOSORT flag. This can make a significant improvement if you do not care about the file order. GLOB_NOSORT returns files as they appear in the directory, which means no sorting; when this flag is not used, the path names are sorted alphabetically.
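The change itself is just a second argument to glob(); a minimal sketch (the pattern below is a placeholder, not my real path):

```php
<?php
// Same pattern as before, but skip the alphabetical sort.
// GLOB_NOSORT returns entries in whatever order the filesystem reports them.
$files = glob('/path/to/dir1/*.jpg', GLOB_NOSORT);
```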

[Screenshot: glob() with GLOB_NOSORT benchmark output]

Processing took 6.37 seconds and 5 MB of memory.

I have done these tests using the function I described in the previous article, and the concept is quite simple: build an array of all files in folder dir1 and check whether each file exists in dir2. Note that timings may vary depending on the CPU load at the moment of the test, and that the time grows quickly with the number of files. You can take a look at the function here. I commented out the part that renames or deletes the files and left just the on-screen messaging. A rough sketch of the idea is shown below.
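This is only a minimal sketch of that glob()-based approach, not the exact function from the previous article; the directory paths and messages are placeholders:

```php
<?php
// Build the list of files in dir1, then check each one against dir2.
$dir1 = '/path/to/dir1';
$dir2 = '/path/to/dir2';

$start = microtime(true);

$files = glob($dir1 . '/*.jpg') ?: array();

$duplicates = 0;
foreach ($files as $file) {
    $name = basename($file);
    if (file_exists($dir2 . '/' . $name)) {
        // The real function renames or deletes here; this sketch only reports.
        echo 'Duplicate: ' . $name . PHP_EOL;
        $duplicates++;
    }
}

echo 'Found ' . $duplicates . ' duplicates in '
   . round(microtime(true) - $start, 2) . ' seconds, using '
   . round(memory_get_peak_usage() / 1048576, 2) . ' MB of memory' . PHP_EOL;
```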

Of course, I could always scan dir2 instead of dir1. If you know which folder will have fewer files, you can choose to start from there; logically, the function will spend less time building the array in memory for the smaller folder. But that does not help me right now, since in my process dir2 keeps growing while dir1 does not.

So for now I don’t think there is much I can do: glob() builds an array with all the files in the specified folder, and it has to do that no matter what.

It could be a lot faster if it didn’t build the array at all and just checked for duplicated files on the fly. This sounds like a good approach, so let’s look in the PHP arsenal and see what else we can find.

How to use readdir() to search for duplicate files in distinct folders

The PHP readdir() function returns the name of the next entry in a given directory. The entries are returned in the order in which they are stored by the filesystem. This seems very similar to glob(), but it doesn’t store all the filenames in memory as an array.

So let’s build a little function using readdir()  to search through the folders for duplicated files.
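As a first step, here is a minimal sketch of a readdir() loop that just walks a folder and prints every entry (the directory path is a placeholder):

```php
<?php
// Walk dir1 entry by entry; readdir() never builds the full list in memory.
$dir1 = '/path/to/dir1';

if ($handle = opendir($dir1)) {
    while (false !== ($entry = readdir($handle))) {
        // Skip the "." and ".." entries.
        if ($entry === '.' || $entry === '..') {
            continue;
        }
        echo $entry . PHP_EOL;
    }
    closedir($handle);
}
```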

And let’s do some benchmark on it 😀

[Screenshot: readdir() listing benchmark output]

Wow… damn… this is awesome. If you blink you won’t see it. This is real speed. But hey, this only prints the files; it doesn’t compare them to find the duplicated ones. Well, let’s add the fancy stuff from our first function and see how it does. Hands-on work:
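Here is a minimal sketch of that combination: the readdir() loop plus the file_exists() check from the first function (paths and messages are placeholders, details may differ from my actual script):

```php
<?php
$dir1 = '/path/to/dir1';
$dir2 = '/path/to/dir2';

$start      = microtime(true);
$duplicates = 0;

if ($handle = opendir($dir1)) {
    while (false !== ($entry = readdir($handle))) {
        if ($entry === '.' || $entry === '..') {
            continue;
        }
        // Check each file on the fly instead of building an array first.
        if (file_exists($dir2 . '/' . $entry)) {
            echo 'Duplicate: ' . $entry . PHP_EOL;
            $duplicates++;
        }
    }
    closedir($handle);
}

echo 'Found ' . $duplicates . ' duplicates in '
   . round(microtime(true) - $start, 2) . ' seconds, using '
   . round(memory_get_peak_usage() / 1024, 2) . ' KB of memory' . PHP_EOL;
```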

And here it is, nice and clean. Let’s run the benchmark on it and hope for the best.

[Screenshot: readdir() duplicate-check benchmark output]

2.72 seconds… well, how about that? I saved more than half of the time just by using the readdir() function. That’s more like it. And take a look at the memory usage! It dropped from 5 MB to 256 KB; isn’t that great? Now apply this algorithm to 800k files instead of 25k… This is really a significant improvement.

At this point I think there is no need for further testing of these two functions, although I agree that, depending on the case, it may not be possible to use readdir() and glob() could be your only approach.

Conclusion: readdir() … you rock!!

We have sped up the process a lot, but I would like to try a few more things and see if the code can get a little faster. After some more testing and a few breakpoints, I noticed that the slowest part is the file_exists() check against the second folder: it eats up more than 2 seconds, which is roughly 90% of the total time. How can we speed this up?

How to speed up the PHP file_exists() function …dead-end?

This is quite a rough road. Lots of people say there is not much you can do about it, but some say that things speed up if you use is_file() instead of file_exists(). Well, maybe on millions of files or on a different folder structure; I’ve seen no change on my 25k set of JPEG images.

I also tried using clearstatcache(), with no better result in my case.
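For completeness, both attempts were simple drop-in changes; a tiny sketch (the path is a placeholder):

```php
<?php
$path = '/path/to/dir2/some-image.jpg'; // placeholder path

// Attempt 1: use is_file() instead of file_exists().
var_dump(is_file($path));

// Attempt 2: clear PHP's stat cache before checking again.
clearstatcache();
var_dump(file_exists($path));
```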

But then I found the so-called stream_resolve_include_path(). The PHP manual says it resolves the filename against the include path according to the same rules as fopen()/include. The value returned by stream_resolve_include_path() is a string containing the resolved absolute filename, or FALSE on failure.

Could the stream_resolve_include_path() function be the holy grail?

So everything should work just fine. If it finds the file, it returns a string, and the path string evaluates to TRUE in an if statement, so we’re good there. If the file is not found, it returns FALSE, which is also fine.

Let’s do some benchmark testing with stream_resolve_include_path() and see if we can replace file_exists()  with it.
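The change is a one-line swap inside the readdir() loop; here is a sketch under the same placeholder paths as before:

```php
<?php
$dir1 = '/path/to/dir1';
$dir2 = '/path/to/dir2';

$start      = microtime(true);
$duplicates = 0;

if ($handle = opendir($dir1)) {
    while (false !== ($entry = readdir($handle))) {
        if ($entry === '.' || $entry === '..') {
            continue;
        }
        // Returns the resolved path (a truthy string) or FALSE if not found.
        if (stream_resolve_include_path($dir2 . '/' . $entry)) {
            echo 'Duplicate: ' . $entry . PHP_EOL;
            $duplicates++;
        }
    }
    closedir($handle);
}

echo 'Found ' . $duplicates . ' duplicates in '
   . round(microtime(true) - $start, 2) . ' seconds' . PHP_EOL;
```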

And here is the result of testing.

[Screenshot: stream_resolve_include_path() benchmark output]

I think there is really no need to tell you that this is it for now. We have compared 25k files against another 10k, all in 1.35 seconds. This really is speed. Did I tell you that the images are not exactly small? There are 2.61 GB of images to compare against another 1.08 GB.

1.35 seconds to compare more than 3 GB of files is really a good time. It is about five times faster than the glob() and file_exists() combination.

I have to say that I ran this on a local machine, and it is more important to me to get the job done than to shave off a bit more time. So seconds are OK for me, and I am not really interested in microseconds.

File check comparison benchmark testing – the final loop

OK, just for the record and to see some bigger numbers processed, I will benchmark only these three PHP functions (is_file(), file_exists() and stream_resolve_include_path()) in a one-million-iteration loop. I will first test with a file that exists and then with a file that doesn’t.

Here is my little function that does the testing on this:
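A sketch of such a loop is below; the test paths and the helper name benchmark_check are placeholders, not my exact script:

```php
<?php
// Time is_file(), file_exists() and stream_resolve_include_path()
// over one million checks of the same path.
function benchmark_check($path, $iterations = 1000000)
{
    $checks = array('is_file', 'file_exists', 'stream_resolve_include_path');

    foreach ($checks as $check) {
        $start = microtime(true);
        for ($i = 0; $i < $iterations; $i++) {
            $check($path);
        }
        echo $check . ': ' . round(microtime(true) - $start, 2) . ' seconds' . PHP_EOL;
    }
}

// Run once with a file that exists and once with one that doesn't.
benchmark_check('/path/to/dir2/existing-image.jpg'); // placeholder path
benchmark_check('/path/to/dir2/missing-image.jpg');  // placeholder path
```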

And here are the results. They are really surprising; I think they speak for themselves.

This explains why my previous test returned better results using stream_resolve_include_path(). The part I do not understand is why is_file() did not make the code faster in the first place. For this test I used a really small image and then repeated the test using a large one, but it seems that the file size does not influence the final result.

So what do you think? If you know of any other alternatives, just let me know. I’m dying to test them out.

One thought on “Benchmarking the glob() “dinosaur” and readdir() PHP functions”

  1. Why does glob seem slower in this benchmark? Because glob will go into sub-directories if you write it like this: “mydir/*”.

    Just make sure there are no sub-directories to keep glob fast.

    “mydir/*.jpg” is faster because glob will not try to get files inside sub-directories.
