caching

  • Still not enough, we were forced to profile the Java code and make some big changes... (from part 1)

    Profiling

    You typically profile an application for speed, memory usage and/or memory leaks. Our application is fast enough at the moment; our major concern is optimizing memory usage and thus avoiding swapping to disk.

    Some words about architecture


    It is not possible to profile an application without a deep understanding of the architecture behind it. The Product Catalog is an innovative product: a meta model for storing insurance products in a database. A Product is read-only and can derive instances that we call Policies. Policies hold the user data, containing no text, just values, and share a similar structure with the Product. This lets the Product know everything about (cross) default values, (cross) validations, multiple texts, attribute order/length/type, etc., and thus separates definition (Products) from implementation (Policies). Products and Policies can be fully described with Bricks and Attributes in a tree structure.

    Reduce the number of objects created


    Looking at the code, we saw that too many Products were loaded in memory (17 Products amount to roughly 15,000 objects: Attributes, Bricks, Texts, Values and ValueRanges). While this clearly gives a speed advantage on an application server, it simply kills the offline platform with its 1 GB of RAM (remember, the memory really free is 500 MB).
    The problem is that Attributes and Bricks use, or can use, a lot of fields/meta data in the database, which translate into simple Java types in memory (Strings for UUIDs and for meta data keys and values). We started looking at the profiler and at the 100 MB used by the product cache.
    Reducing this number of objects was the first priority; a lot of them are meta data which are common and spread across the Product tree in memory. Since avoiding the creation of unneeded objects is always recommended, we decided to remove duplicate elements in the tree by singularizing them. This is possible because the Product is read-only and made of identical meta data keys and meta data values.

    Entropy and cardinality of meta data
    An Attribute may have an unlimited number of meta texts (among other things); common meta data keys are the "long", "short" and "help" text descriptions in 4 languages (en_us, fr_ch, de_ch, it_ch). While this is not a problem in the database, it makes the Product object tree quite huge in the Product cache (which contains the Product value objects). Counting some of them in the database returns stunning results.
    We found 60,000 "long" texts, which translate into 60,000 String text keys and 60,000 String text values (the worst-case scenario, since text values may not all be reusable). Reducing this number of objects is done quite easily by never creating new String, Decimal or Integer instances and always returning the same object instance: we keep them in a Map and return either a new instance or a previously cached one, as sketched below.
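
    As a minimal sketch (not the actual Product Catalog code), such a cache boils down to a map that always hands back the first instance it has seen for a given value:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    /**
     * Minimal sketch (not the original Product Catalog code): keep one canonical
     * instance per distinct meta data value and reuse it across the whole
     * read-only Product tree.
     */
    public final class ValueInterner {

        private static final Map<String, String> CACHE = new ConcurrentHashMap<String, String>();

        /** Returns the previously cached instance if one exists, otherwise caches and returns the given one. */
        public static String intern(String value) {
            if (value == null) {
                return null;
            }
            String cached = CACHE.get(value);
            if (cached != null) {
                return cached;       // reuse the single shared "de_ch", "long", "0", ... instance
            }
            CACHE.put(value, value); // first occurrence: remember it for later reuse
            return value;
        }

        private ValueInterner() {
        }
    }

    Value objects then call such an intern method while they are built, so thousands of occurrences of the same meta data value collapse into a single instance.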

    Large object cardinality but poor entropy
    By running two or three SQL statements and trying to distinguish the real distinct values, we found that a lot of these meta data are made of a relatively small number of different values. By just storing a limited set of Strings like "0", "1", "2" ... "99", "default", "long", "short", "de_ch", "fr_ch", we reached a cache efficiency and object-instance reuse of 99%.
    After that "small" change in the way value objects (VO) are created and connected, a Java String object that previously contained "de_ch" and existed 10,000 times in memory is now replaced across all Attributes/Bricks by the same instance!

     The gain is simply phenomenal: more than 50% of memory saved.

    Reducing the number of objects in memory 
    Instead of storing thousands of Product text Strings in memory, we decided to allocate them on disk using the Java reflection API and a dynamic proxy.

    The idea is to save all Strings in one or more files on disk, the position and length of each text being saved in the corresponding value object. So basically we trade the space used by a String in memory for a long (the String's position in the file, relative to the start of the file) and an int (the length of the String).

    References: Proxy - InvocationHandler
    Summary: Java String disk-based allocation
    Code snippet: soon
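
    Until that snippet is published, here is a minimal sketch of the idea, assuming a single append-only text file; the class and method names below are hypothetical, and the Proxy/InvocationHandler wiring referenced above is left out:

    import java.io.IOException;
    import java.io.RandomAccessFile;

    /**
     * Hypothetical sketch of disk-based String allocation: the text itself lives in
     * a file on disk, the value object only keeps a long (offset) and an int (length).
     */
    public class TextStore {

        private final RandomAccessFile file;

        public TextStore(String path) throws IOException {
            this.file = new RandomAccessFile(path, "rw");
        }

        /** Appends the text and returns {offset, byte length}, to be stored in the value object. */
        public synchronized long[] write(String text) throws IOException {
            byte[] bytes = text.getBytes("UTF-8");
            long offset = file.length();
            file.seek(offset);
            file.write(bytes);
            return new long[] { offset, bytes.length };
        }

        /** Re-reads the text on demand, e.g. from the InvocationHandler behind the dynamic proxy. */
        public synchronized String read(long offset, int length) throws IOException {
            byte[] buffer = new byte[length];
            file.seek(offset);
            file.readFully(buffer);
            return new String(buffer, "UTF-8");
        }
    }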

    Use better data structures
    Java has a lot of quality libraries; the Apache Commons Collections are well known. Javolution is a real-time library designed to reduce the garbage collector impact. We have used FastTable and FastMap where it makes sense (see the sketch after the list below).

    For example, the class FastTable has the following advantages over the widely used java.util.ArrayList:
    • No large array allocation (for large collections, multi-dimensional arrays are employed). The garbage collector is not stressed with large chunks of memory to allocate, which are likely to trigger a full garbage collection due to memory fragmentation.
    • Supports concurrent access/iteration without synchronization as long as collection values are not removed/inserted.
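
    A minimal usage sketch, assuming the Javolution jar is on the classpath (FastTable and FastMap are drop-in java.util.List/Map implementations):

    import java.util.List;
    import java.util.Map;

    import javolution.util.FastMap;
    import javolution.util.FastTable;

    public class FastCollectionsExample {

        public static void main(String[] args) {
            // FastTable is a List that grows incrementally instead of reallocating one large array.
            List<String> attributes = new FastTable<String>();
            attributes.add("long");
            attributes.add("short");
            attributes.add("help");

            // FastMap is a Map designed to minimize the garbage collector impact.
            Map<String, String> texts = new FastMap<String, String>();
            texts.put("de_ch", "Hilfe");
            texts.put("fr_ch", "Aide");

            System.out.println(attributes.size() + " attributes, " + texts.size() + " texts");
        }
    }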

    Different caching strategies
    By design the ProductCatalog is able to use many caching strategies. One of them, named "Nocache", limits the number of objects in memory to the bare minimum and redirects all product access to the database. In a mono-user environment, and since products reside in only 4 tables (so only 4 selects to read all the data from the DB, plus a few VOs to rebuild the tree, are needed), the throughput is more than enough.
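
    As an illustration only (the real ProductCatalog interfaces are not part of this article, so every type below is hypothetical), such pluggable strategies can boil down to a small interface with one implementation per strategy:

    /** Hypothetical sketch of a pluggable caching strategy; not the actual ProductCatalog code. */
    interface Product { }

    interface ProductDao {
        // In the real application: 4 selects plus a few value objects to rebuild the Product tree.
        Product loadProductTree(String productId);
    }

    interface ProductCacheStrategy {
        Product get(String productId);
    }

    /** "Nocache": keep nothing in memory, redirect every access to the database. */
    class NoCacheStrategy implements ProductCacheStrategy {

        private final ProductDao dao;

        NoCacheStrategy(ProductDao dao) {
            this.dao = dao;
        }

        public Product get(String productId) {
            return dao.loadProductTree(productId);
        }
    }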

    More to come...



  • Resources such as JavaScript and CSS files can be compressed before being sent to the browser, improving network efficiency and application load time in certain cases. If you are not using Apache with mod_deflate or nginx in front of your web application, you may need to implement resource compression yourself…

    Wait! Don’t start writing your own filter to compress files like CSS, HTML, txt or JavaScript: it is way more difficult than you think to properly handle HTTP response headers, MIME types and caching. In one sentence: don’t reinvent the wheel, use Ehcache for example.

    Ehcache is an open source, standards-based cache used to boost performance, offload the database and simplify scalability. Ehcache is robust, proven and full-featured, and this has made it the most widely used Java-based cache. It can scale from in-process with one or more nodes through to a mixed in-process/out-of-process configuration with terabyte-sized caches. For applications needing a coherent distributed cache, Ehcache uses the open source Terracotta Server Array.

    In the pom.xml of your project, add the following dependency on ehcache-web:

    <dependency>
        <groupId>net.sf.ehcache</groupId>
        <artifactId>ehcache-web</artifactId>
        <version>2.0.4</version>
    </dependency>

    In your web.xml, add a filter and configure it properly:

    <filter>
     <filter-name>CompressionFilter</filter-name>
     <filter-class>net.sf.ehcache.constructs.web.filter.GzipFilter</filter-class>
    </filter>
    <filter-mapping>
     <filter-name>CompressionFilter</filter-name>
     <url-pattern>*.css</url-pattern>
    </filter-mapping>
    <filter-mapping>
     <filter-name>CompressionFilter</filter-name>
     <url-pattern>*.html</url-pattern>
    </filter-mapping>
    <filter-mapping>
     <filter-name>CompressionFilter</filter-name>
     <url-pattern>*.js</url-pattern>
    </filter-mapping>
    <filter-mapping>
     <filter-name>CompressionFilter</filter-name>
     <url-pattern>/*</url-pattern>
    </filter-mapping>

    Read more on the Ehcache Web Caching page.

    As a bonus, here is also the configuration for the famous challenger HTTP server, nginx:

     ##
     # Gzip Settings
     ##
     gzip  on;
     gzip_http_version 1.1;
     gzip_vary on;
     gzip_comp_level 6;
     gzip_proxied any;
     gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/javascript text/x-js;
     gzip_buffers 16 8k;
     gzip_disable "MSIE [1-6]\.(?!.*SV1)";


    Or for the number one HTTP server, Apache, using mod_deflate (/etc/apache2/conf.d/deflate.conf):

    <Location />
    # Insert filter
    SetOutputFilter DEFLATE
    
    AddOutputFilterByType DEFLATE text/plain
    AddOutputFilterByType DEFLATE text/xml
    AddOutputFilterByType DEFLATE application/xhtml+xml
    AddOutputFilterByType DEFLATE text/css
    AddOutputFilterByType DEFLATE application/xml
    AddOutputFilterByType DEFLATE image/svg+xml
    AddOutputFilterByType DEFLATE application/rss+xml
    AddOutputFilterByType DEFLATE application/atom+xml
    AddOutputFilterByType DEFLATE application/x-javascript
    AddOutputFilterByType DEFLATE text/html
    
    # Netscape 4.x has some problems...
    BrowserMatch ^Mozilla/4 gzip-only-text/html
    
    # Netscape 4.06-4.08 have some more problems
    BrowserMatch ^Mozilla/4\.0[678] no-gzip
    
    # MSIE masquerades as Netscape, but it is fine
    BrowserMatch \bMSIE !no-gzip !gzip-only-text/html
    # Don't compress images
    SetEnvIfNoCase Request_URI \
    \.(?:gif|jpe?g|png)$ no-gzip dont-vary
    
    # Make sure proxies don't deliver the wrong content
    Header append Vary User-Agent env=!dont-vary
    </Location>
  • We have been working for 3 days on tuning a big application:

    • A client-server, enterprise-grade application,
    • Runs on 2 JVMs (Tomcat/application server) with 4 GB of RAM each!
    • Runs on 2 dual-core AMD 64-bit servers,
    • Linux 64-bit,
    • Has a lot of parallel users, and more than 10,000 are registered,
    • Uses a product meta model which separates definition from implementation data,
    • JavaServer Faces, Java, Ajax.
     This application is just consuming too much memory for the offline version. Our objective is to make that big application run:

    • With the same code as above,
    • On Windows XP,
    • On an IBM T40, Intel Pentium M 1.6 GHz, DDR266/PC2100,
    • In 1 JVM with 500 MB in Tomcat,
    • With 1 GB of physical RAM,
    • For 1 desktop user who may also run Lotus Notes and Microsoft Office at the same time...
    There are already a lot of good resources and valuable advice on the internet (Google is your friend :-)). Before digging into the code, and since the code is already in production, we first did some tuning on the components.
    By tuning each component involved, one after the other, we followed the principle: let's get some quick wins first before changing algorithms and increasing the risk of breaking something...
    In order to back up each change with some statistics, the first step was to develop a test case with Web Stress Tool (commercial), but Apache JMeter (... replace with your favorite web testing tool) would have done the job.

    At the OS Level

    By trying to convince the company to turn the anti-virus off for some files and directories. It was scanning XHTML, JavaScript, XML, class files, images, so nearly everything... during EACH file access. Note that the user has no Windows rights to alter those files.


    MySQL 5 (we are already using the latest 5.x branch, by luck)

    By removing TCP database access and using named pipes only (+30 to +50% performance),
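
    For the Java side, assuming MySQL Connector/J on Windows, the switch boils down to the JDBC URL; the pipe path below is an assumption and must match your MySQL server configuration:

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class NamedPipeConnection {

        public static void main(String[] args) throws Exception {
            Class.forName("com.mysql.jdbc.Driver");

            // Connector/J can talk to a local MySQL server over a Windows named pipe
            // instead of TCP; \\.\pipe\MySQL is the server's usual default pipe name.
            String url = "jdbc:mysql://localhost/mydb"
                    + "?socketFactory=com.mysql.jdbc.NamedPipeSocketFactory"
                    + "&namedPipePath=\\\\.\\pipe\\MySQL";

            Connection connection = DriverManager.getConnection(url, "user", "password");
            System.out.println("Connected over named pipe: " + !connection.isClosed());
            connection.close();
        }
    }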

    By installing MySQL Enterprise Advisor and Monitor (you can request a free trial key here) and looking at what the advisor recommends. Attention: this tool has been developed for monitoring servers, so some recommendations are simply not always applicable. In our case we are constrained by memory, remember less than 500 MB, so we did not blindly follow the advice. Basic stuff was done, like adding indexes (where it makes sense, to avoid full table scans and reduce slow queries) and increasing buffers,

    By switching to MyISAM (multi-threaded with table locking) instead of InnoDB (multi-threaded with row locking), and also avoiding having other storage engines with different algorithms running in parallel.

    MyISAM is the default storage engine for the MySQL relational database management system. It is based on the older ISAM code but has many useful extensions. In recent MySQL versions, the InnoDB engine has widely started to replace MyISAM due to its support for transactions, referential integrity constraints, and higher concurrency. Each MyISAM table is stored on disk in three files. The files have names that begin with the table name and have an extension to indicate the file type. MySQL uses a .frm file to store the definition of the table, but this file is not a part of the MyISAM engine, but instead is a part of the server. The data file has a .MYD (MYData) extension. The index file has a .MYI (MYIndex) extension. [Wikipedia]

    InnoDB is a storage engine for MySQL, included as standard in all current binaries distributed by MySQL AB. Its main enhancement over other storage engines available for use with MySQL is ACID-compliant transaction support, similar to PostgreSQL, along with declarative referential integrity (foreign key support). InnoDB became a product of Oracle Corporation after their acquisition of Innobase Oy, in October 2005. The software is dual licensed. It is distributed under the GNU General Public License, but can also be licensed to parties wishing to combine InnoDB in proprietary software. [Wikipedia]

    What are the differences, and why might you also want to use MyISAM for mono-user applications...
    1. InnoDB recovers from a crash or other unexpected shutdown by replaying its logs. MyISAM must fully scan and repair or rebuild any indexes or possibly tables which had been updated but not fully flushed to disk. Since the InnoDB approach is approximately fixed time while the MyISAM time grows with the size of the data files, InnoDB offers greater perceived availability and reliability as database sizes grow.
    2. MyISAM relies on the operating system for caching reads and writes to the data rows while InnoDB does this within the engine itself, combining the row caches with the index caches. Dirty (changed) database pages are not immediately sent to the operating system to be written by InnoDB, which can make it substantially faster than MyISAM in some situations.
    3. InnoDB stores data rows physically in primary key order while MyISAM typically stores them mostly in the order in which they are added. This corresponds to the MS SQL Server feature of “Clustered Indexes” and the Oracle feature known as "index organized tables." When the primary key is selected to match the needs of common queries this can give a substantial performance benefit. For example, customer bank records might be grouped by customer in InnoDB but by transaction date with MyISAM, so InnoDB would likely require fewer disk seeks and less RAM to retrieve and cache a customer account history. On the other hand, inserting data in orders that differ substantially from primary key (PK) order will presumably require that InnoDB do a lot of reordering of data in order to get it into PK order. This places InnoDB at a slight disadvantage in that it does not permit insertion order based table structuring.
    4. InnoDB currently does not provide the compression and terse row formats provided by MyISAM, so both the disk and cache RAM required may be larger. A lower overhead format is available for MySQL 5.0, reducing overhead by about 20% and use of page compression is planned for a future version.
    5. When operating in fully ACID-compliant modes, InnoDB must do a flush to disk at least once per transaction, though it will combine flushes for inserts from multiple connections. For typical hard drives or arrays, this will impose a limit of about 200 update transactions per second. If you require higher transaction rates, disk controllers with write caching and battery backup will be required in order to maintain transactional integrity. InnoDB also offers several modes which reduce this effect, naturally leading to a loss of transactional integrity. MyISAM has none of this overhead, but only because it does not support transactions. [WikiPedia]
    For us, the speed of MyISAM clearly outweighs its drawbacks for a desktop application.

    JSF tuning

    Obvious settings here; JSF is lacking finer tuning settings. Serialization occurs during the life cycle and consumes memory and CPU. We may dig deeper later.
    • javax.faces.STATE_SAVING_METHOD to server
    • org.apache.myfaces.COMPRESS_STATE_IN_SESSION to true, since memory is the biggest constraint for us
    • org.apache.myfaces.NUMBER_OF_VIEWS_IN_SESSION to 0
    • facelets.BUFFER_SIZE to 8192

    Tomcat tuning

    Nothing big can be done here... For me, Tomcat is really missing a dynamic web application loader: Tomcat simply loads all applications found in /webapps at startup, even if they are not used, and they are never removed from memory or serialized to disk. Tomcat 4.1 seems to have a memory footprint of 22 MB; moving to the latest Tomcat 6.0 is too big a change for us right now, but we might reconsider it in the future. Removing unused Java libraries from WEB-INF/lib by trial and error can save some precious bytes, as it is pretty common when you use frameworks to end up with jars you do not need, for example junit.jar, JDBC drivers, jms.jar... Moving common libs to shared/lib may also help remove duplicate jars from the webapp class loaders and from memory.


    JVM tuning

    Java 1.5 and Java 1.6 have made a lot of progress, and the JIT compiler found in Java 1.5/1.6 is getting more and more aggressive... The basic rule is to turn the JVM GC log on (by adding -Xloggc:<file> [-XX:+PrintGCDetails]) and analyze it offline with a tool like GCViewer (free). The JIT is doing a pretty good job, as the application runs faster and faster over time, but that is just a feeling ;-)
    By analyzing the GC logs we were able to optimize and avoid big misconfiguration mistakes; once more, a lot of articles and books are available on how to tune a JVM. Sadly, Java has no advisor at the moment and is not using genetic algorithms to tune itself... It remains a dream for now.

    We used an empirical approach, which means:
    1. change the JVM parameters -> run the test cases -> decide whether to give CPU away or minimize RAM usage -> go back to 1.

    We came down to the following exotic parameters (Xms and Xmx are not of any help here, since they really depend on your application and how memory is managed internally):

     -XX:+AggressiveOpts -XX:-UseConcMarkSweepGC

    By the way, I have been using them in Eclipse + JDK 1.6 for months. The page A Collection of JVM Options, compiled by Joseph D. Mocker (Sun Microsystems, Inc.), has been of great help during this stage.

    Still not enough, we were forced to profile the Java code and make some big changes...
  • I have optimized my Joomla! homepage a bit over the last few days. This has been achieved by:

    • Enabling Joomla module caching in all 3rd party modules where it was missing or not implemented at all,
    • Starting to offload some assets (JavaScript) to faster hosting.

    Click read more to apply the same to your own site.

    Use Joomla Module caching

    Not all 3rd party Joomla modules use caching. This means that in the worst case, some Joomla! modules may issue way too many SQL queries. A way to reduce the load is to activate module caching. You’ll have to go through all 3rd party modules and check whether their administrator panel has a setting to enable/disable the cache.

    [Screenshot: enabling the cache setting in the Joomla administrator panel]

    You’ll see that 90% of all modules (except official Joomla! modules, which are able to deal with caching) do NOT support caching. We will change that now:

    For every module without cache, open the XML file at /modules/mod_xxxxxxx/mod_xxxxxxx.xml and add, between <params> .. </params>:

    <param name="cache" type="radio" default="0" label="Enable Cache" 
           description="Select whether to cache the content of this module">
     <option value="0">No</option>
     <option value="1">Yes</option>
    </param>

    Note that if <params> .. </params> does not exist, just add it like below:

    <params>
     <param name="cache" type="radio" default="0" label="Enable Cache" 
           description="Select whether to cache the content of this module">
      <option value="0">No</option>
      <option value="1">Yes</option>
     </param>
    </params>

    Visit or reload the admin panel of that module and set Enable Cache to Yes. Click Save/Apply at least once.

    Now the output of this module will be saved in /cache and will only be refreshed when the global Joomla cache times out (900 seconds by default). Consider also contacting the author of the module so he can patch his code.

    Offload assets

    Offloading assets (JavaScript, static images, static files) can bring tremendous speed gains, at the cost of resolving more DNS names. Using this technique helps your Apache concentrate on PHP instead of streaming static data.

    Offload JavaScript

    When you look at the Joomla! frontend source code, you will see that the JavaScript library mootools.js is 74 KB. Google offers to host all major AJAX libraries free of charge at http://code.google.com/apis/ajaxlibs/documentation/, so why not profit from their datacenter’s speed, bandwidth and response time?

    Now the dirty part: you can’t tell Joomla! not to include mootools.js from /media/system/js/mootools.js at rendering time. We will have to patch the code of Joomla!

    Open /libraries/joomla/html/html/behavior.php, search for the mootools.js include, and replace it as shown below (the old line is kept as a comment):

    if ($debug || $konkcheck) {
      JHTML::script('mootools-uncompressed.js', 'media/system/js/', false);
    } else {
     //JHTML::script('mootools.js', 'media/system/js/', false); // old Joomla code
     JHTML::script('mootools-yui-compressed.js', 'http://ajax.googleapis.com/ajax/libs/mootools/1.11/', false);
    }

    Joomla! uses mootools.js version 1.11; don’t use the latest version (1.2.3), as most Joomla! plugins won’t work (but your mileage may vary).

    To be continued

  • The Alternative PHP Cache (APC) is a free and open opcode cache for PHP. It was conceived to provide a free, open, and robust framework for caching and optimizing PHP intermediate code. (From http://nl2.php.net/apc)

    Unpack your distribution file. You will have downloaded a file named something like apc_x.y.tar.gz. Download and unzip this file with commands like:
    # wget http://pecl.php.net/get/APC-3.0.14.tgz
    # gunzip apc_x.y.tar.gz

    Next you have to untar it with
    #  tar xvf apc_x.y.tar

    This will create an apc_x.y directory. cd into this new directory:
    # cd apc_x.y

    phpize is a script that should have been installed with PHP, and is normally located in /usr/local/php/bin, assuming you installed PHP in /usr/local/php. (If you do not have the phpize script, you must reinstall PHP and be sure not to disable PEAR.)

    Locate phpize:
    # find / -name phpize

    mine is in
     /etc/alternatives/phpize
    yours may be in /usr/local/php/bin/phpize

    Run the phpize command:
    #  /usr/local/php/bin/phpize

    Its output should resemble this:
            Configuring for:
              PHP Api Version:   20020918
              Zend Module Api No:   20020429
              Zend Extension Api No:   20021010


    phpize should create a configure script in the current directory. If you get errors instead, you might be missing some required development tools, such as autoconf or libtool. You can try downloading the latest versions of those tools and running phpize again.

    Run the configure script.

    phpize creates a configure script. The only option you need to specify is the location of your php-config script.

    Find the location of php-config:
    # find / -name php-config
    Then run:
    # ./configure --enable-apc --enable-apc-mmap --with-apxs --with-php-config=/etc/alternatives/php-config

    php-config should be located in the same directory as phpize.
    If you prefer to use mmap instead of the default IPC shared memory support,  add --enable-apc-mmap to your configure line.

    If you prefer to use sysv IPC semaphores over the safer fcntl() locks, add --enable-sem to your configure line. If you don't have a problem with your server segfaulting, or any other unnatural accumulation of semaphores on your system, the semaphore-based locking is slightly faster.

    Compile and install the files.
    Simply type:
    # make install
    Installing shared extensions:     /usr/lib/php5/extensions/


    Suggested Configuration (in your php.ini file)

      extension=apc.so
      apc.enabled=1
      apc.shm_segments=1
      apc.shm_size=128
      apc.ttl=7200
      apc.user_ttl=7200
      apc.num_files_hint=1024
      apc.mmap_file_mask=/tmp/apc.XXXXXX
      apc.enable_cli=1



    Although the default APC settings are fine for many installations, serious
    users should consider tuning the following parameters:

        OPTION                  DESCRIPTION
        ------------------      --------------------------------------------------
        apc.enabled             This can be set to 0 to disable APC. This is
                                primarily useful when APC is statically compiled
                                into PHP, since there is no other way to disable
                                it (when compiled as a DSO, the zend_extension
                                line can just be commented-out).
                                (Default: 1)

        apc.shm_segments        The number of shared memory segments to allocate
                                for the compiler cache. If APC is running out of
                                shared memory but you have already set
                                apc.shm_size as high as your system allows, you
                                can try raising this value.  Setting this to a
                                value other than 1 has no effect in mmap mode
                                since mmap'ed shm segments don't have size limits.
                                (Default: 1)

        apc.shm_size            The size of each shared memory segment in MB.
                                By default, some systems (including most BSD
                                variants) have very low limits on the size of a
                                shared memory segment.
                                (Default: 30)

        apc.optimization        This option has been deprecated.
                                (Default: 0)

        apc.num_files_hint      A "hint" about the number of distinct source files
                                that will be included or requested on your web
                                server. Set to zero or omit if you're not sure;
                                this setting is mainly useful for sites that have
                                many thousands of source files.
                                (Default: 1000)

        apc.user_entries_hint   Just like num_files_hint, a "hint" about the number
                                of distinct user cache variables to store.
                                Set to zero or omit if you're not sure;
                                (Default: 4096)

        apc.ttl                 The number of seconds a cache entry is allowed to
                                idle in a slot in case this cache entry slot is
                                needed by another entry.  Leaving this at zero
                                means that your cache could potentially fill up
                                with stale entries while newer entries won't be
                                cached.
                                (Default: 0)

        apc.user_ttl            The number of seconds a user cache entry is allowed
                                to idle in a slot in case this cache entry slot is
                                needed by another entry.  Leaving this at zero
                                means that your cache could potentially fill up
                                with stale entries while newer entries won't be
                                cached.
                                (Default: 0)


        apc.gc_ttl              The number of seconds that a cache entry may
                                remain on the garbage-collection list. This value
                                provides a failsafe in the event that a server
                                process dies while executing a cached source file;
                                if that source file is modified, the memory
                                allocated for the old version will not be
                                reclaimed until this TTL reached. Set to zero to
                                disable this feature.
                                (Default: 3600)

        apc.cache_by_default    On by default, but can be set to off and used in
                                conjunction with positive apc.filters so that files
                                are only cached if matched by a positive filter.
                                (Default: On)

        apc.filters             A comma-separated list of POSIX extended regular
                                expressions. If any pattern matches the source
                                filename, the file will not be cached. Note that
                                the filename used for matching is the one passed
                                to include/require, not the absolute path.  If the
                                first character of the expression is a + then the
                                expression will be additive in the sense that any
                                files matched by the expression will be cached, and
                                if the first character is a - then anything matched
                                will not be cached.  The - case is the default, so
                                it can be left off.
                                (Default: "")

        apc.mmap_file_mask      If compiled with MMAP support by using --enable-mmap
                                this is the mktemp-style file_mask to pass to the
                                mmap module for determing whether your mmap'ed memory
                                region is going to be file-backed or shared memory
                                backed.  For straight file-backed mmap, set it to
                                something like /tmp/apc.XXXXXX (exactly 6 X's).
                                To use POSIX-style shm_open/mmap put a ".shm"
                                somewhere in your mask.  eg.  "/apc.shm.XXXXXX"
                                You can also set it to "/dev/zero" to use your
                                kernel's /dev/zero interface to anonymous mmap'ed
                                memory.  Leaving it undefined will force an
                                anonymous mmap.
                                (Default: "")

        apc.slam_defense        ** DEPRECATED - Use apc.write_lock instead **
                                On very busy servers whenever you start the server or
                                modify files you can create a race of many processes
                                all trying to cache the same file at the same time.
                                This option sets the percentage of processes that will
                                skip trying to cache an uncached file.  Or think of it
                                as the probability of a single process to skip caching.
                                For example, setting this to 75 would mean that there is
                                a 75% chance that the process will not cache an uncached
                                file.  So the higher the setting the greater the defense
                                against cache slams.  Setting this to 0 disables this
                                feature.
                                (Default: 0)

        apc.file_update_protection
                                When you modify a file on a live web server you really
                                should do so in an atomic manner.  That is, write to a
                                temporary file and rename (mv) the file into its permanent
                                position when it is ready.  Many text editors, cp, tar and
                                other such programs don't do this.  This means that there
                                is a chance that a file is accessed (and cached) while it
                                is still being written to.  This file_update_protection
                                setting puts a delay on caching brand new files.  The
                                default is 2 seconds which means that if the modification
                                timestamp (mtime) on a file shows that it is less than 2
                                seconds old when it is accessed, it will not be cached.
                                The unfortunate person who accessed this half-written file
                                will still see weirdness, but at least it won't persist.
                                If you are certain you always atomically update your files
                                by using something like rsync which does this correctly, you
                                can turn this protection off by setting it to 0.  If you
                                have a system that is flooded with io causing some update
                                procedure to take longer than 2 seconds, you may want to
                                increase this a bit.
                                (Default: 2)

        apc.enable_cli          Mostly for testing and debugging.  Setting this enables APC
                                for the CLI version of PHP.  Normally you wouldn't want to
                                create, populate and tear down the APC cache on every CLI
                                request, but for various test scenarios it is handy to be
                                able to enable APC for the CLI version of APC easily.
                                (Default: 0)

        apc.max_file_size       Prevents large files from being cached.
                                (Default: 1M)

        apc.stat                Whether to stat the main script file and the fullpath
                                includes.  If you turn this off you will need to restart
                                your server in order to update scripts.
                                (Default: 1)

        apc.write_lock          On busy servers when you first start up the server, or when
                                many files are modified, you can end up with all your processes
                                trying to compile and cache the same files.  With write_lock
                                enabled, only one process at a time will try to compile an
                                uncached script while the other processes will run uncached
                                instead of sitting around waiting on a lock.
                                (Default: 1)

        apc.report_autofilter   Logs any scripts that were automatically excluded from being
                                cached due to early/late binding issues.
                                (Default: 0)

        apc.rfc1867             RFC1867 File Upload Progress hook handler is only available
                                if you compiled APC against PHP 5.2.0 or later.  When enabled
                                any file uploads which includes a field called
                                APC_UPLOAD_PROGRESS before the file field in an upload form
                                will cause APC to automatically create an upload_<key>
                                user cache entry where <key> is the value of the
                                APC_UPLOAD_PROGRESS form entry.

                                Note that the file upload tracking is not threadsafe at this
                                point, so new uploads that happen while a previous one is
                                still going will disable the tracking for the previous.
                                (Default: 0)

        apc.localcache          This enables a lock-free local process shadow-cache which
                                reduces lock contention when the cache is being written to.
                                (Default: 0)

        apc.localcache.size     The size of the local process shadow-cache, should be set to
                                a sufficiently large value, approximately half of num_files_hint.
                                (Default: 512)

        apc.include_once_override
                                Optimize include_once and require_once calls and avoid the
                                expensive system calls used.
                                (Default: 0)