📅 2015-Apr-24 ⬩ ✍️ Ashwin Nanjappa ⬩ 🏷️ glibc, rand, srand ⬩ 📚 Archive

I was using the simple `rand` API provided by C for generating random numbers. I needed to generate `m` sequences of `n` numbers each. To my surprise, I found that two of these sequences were exactly the same!

I was using `srand` to seed the generation of each sequence. In the release version of my code, I was seeding it using `time`, which is a common practice:

`srand((unsigned int) time(nullptr));`

When I needed to debug the code, I needed the sequences to be reproducible, so I seeded them as:

```
// m sequences of n random numbers
for (int i = 0; i < m; ++i)
{
    srand(i);
    for (int j = 0; j < n; ++j)
    {
        // Use rand() here
    }
}
```

On closer observation, the sequences produced by `srand(0)` and `srand(1)` were exactly the same!

This was surprising behavior. The `srand` man page had no information about this. It only warned that if no seed was provided, `1` would be assumed as the seed:

`If no seed value is provided, the rand() function is automatically seeded with a value of 1.`

I found the problem by looking in the source code of **GNU Libc**. In the implementation of `srand` in the file `stdlib/random_r.c` lies this gem:

```
/* We must make sure the seed is not 0. Take arbitrarily 1 in this case. */
if (seed == 0)
  seed = 1;
```

You have got to love open source! So, a seed of `0` is internally reset to `1` in the glibc implementation of `srand`!

So, beware of the seed you pass to `srand`. It might be wise to make sure that neither `0` nor `1` is used as a seed on Linux.

**Tried with:** GLibc 2.19.1 and Ubuntu 14.04