Code:
From the C FAQ:
19.37:  How can I implement a delay, or time a user's response, with sub-
        second resolution?

A:      Unfortunately, there is no portable way.  V7 Unix, and derived
        systems, provided a fairly useful ftime() routine with
        resolution up to a millisecond, but it has disappeared from
        System V and POSIX.  Other routines you might look for on your
        system include clock(), delay(), gettimeofday(), msleep(),
        nap(), napms(), nanosleep(), setitimer(), sleep(), times(), and
        usleep().  (A routine called wait(), however, is at least under
        Unix *not* what you want.)  The select() and poll() calls (if
        available) can be pressed into service to implement simple
        delays.  On MS-DOS machines, it is possible to reprogram the
        system timer and timer interrupts.

        Of these, only clock() is part of the ANSI Standard.  The
        difference between two calls to clock() gives elapsed execution
        time, and if CLOCKS_PER_SEC is greater than 1, the difference will
        have subsecond resolution.  However, clock() gives elapsed
        processor time used by the current program, which on a
        multitasking system may differ considerably from real time.
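
        For instance, a minimal sketch of timing a stretch of code with
        clock() might look like this (the busy loop is only illustrative
        work to be measured):

                #include <stdio.h>
                #include <time.h>

                int main(void)
                {
                        clock_t start, end;
                        volatile long sink = 0; /* can't be optimized away */
                        long i;

                        start = clock();
                        for (i = 0; i < 10000000L; i++)
                                sink += i;      /* the work being timed */
                        end = clock();

                        printf("CPU time used: %f seconds\n",
                                (double)(end - start) / CLOCKS_PER_SEC);
                        return 0;
                }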

        If you're trying to implement a delay and all you have available
        is a time-reporting function, you can implement a CPU-intensive
        busy-wait, but this is only an option on a single-user, single-
        tasking machine as it is terribly antisocial to any other
        processes.  Under a multitasking operating system, be sure to
        use a call which puts your process to sleep for the duration,
        such as sleep() or select(), or pause() in conjunction with
        alarm() or setitimer().
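
        For example, if select() is available, a sub-second sleep can be
        built by passing it no file descriptors at all; a rough sketch
        (the name delay_ms is only illustrative):

                #include <sys/time.h>           /* struct timeval */
                #include <sys/select.h>         /* select() on POSIX systems */
                #include <stddef.h>             /* NULL */

                void delay_ms(long ms)
                {
                        struct timeval tv;

                        tv.tv_sec  = ms / 1000;
                        tv.tv_usec = (ms % 1000) * 1000;
                        select(0, NULL, NULL, NULL, &tv);  /* sleeps for the interval */
                }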

        For really brief delays, it's tempting to use a do-nothing loop
        like

                long int i;
                for(i = 0; i < 1000000; i++)
                        ;

        but resist this temptation if at all possible!  For one thing,
        your carefully-calculated delay loops will stop working next
        month when a faster processor comes out.  Perhaps worse, a
        clever compiler may notice that the loop does nothing and
        optimize it away completely.
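
        If the system provides the POSIX nanosleep() routine mentioned
        above, a brief delay can instead be written with no dependence
        on processor speed; a rough sketch (the name brief_delay is only
        illustrative):

                #include <time.h>

                void brief_delay(void)
                {
                        struct timespec ts;

                        ts.tv_sec  = 0;
                        ts.tv_nsec = 500000L;   /* half a millisecond */
                        nanosleep(&ts, NULL);   /* may return early on a signal */
                }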

        References: H&S Sec. 18.1 pp. 398-9; PCS Sec. 12 pp. 197-8,
        215-6; POSIX Sec. 4.5.2.
Googling around, I found this..
which confirms what pprllo says