database

A database is an organized collection of data. The data is typically organized to model aspects of reality in a way that supports processes requiring information. Read more at Wikipedia.

  • From "Joe Six Pack" users to advanced users, pick your method...

    1. You only want to use the Mambo admin panel - EASIEST
    2. Your provider has given you a Plesk panel - EASY
    3. Your provider only gives you Telnet or SSH access to the server - ADVANCED USER
    4. You want more! - VERY ADVANCED USER

    I am using method 4, which isn't any more difficult and is a lot better: open the script and set the variables according to its internal documentation, upload the file to the server (not in httpdocs under Plesk), chmod the file to 700 (rwx------) and define a daily crontab (a sketch follows below). This script makes daily, weekly and monthly backups and sends me a mail with the result and a report...
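
    As a sketch of that last step, assuming the script lives in a private directory (hypothetical path; the script itself is AutoMySQLBackup, described in method 4 below):

    chmod 700 /home/httpd/vhosts/yourdomain.tld/private/automysqlbackup.sh   # hypothetical location
    crontab -e
    # then add a daily entry, e.g. to run every night at 02:00:
    0 2 * * * /home/httpd/vhosts/yourdomain.tld/private/automysqlbackup.sh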

     1. You only want to use the Mambo admin panel - EASIEST

    Install a Mambo component: Site Backup from bigAPE.

    The component provides a basic site backup feature set. The following features are currently offered:

    • Ability to back up the entire Mambo file and database system to a compressed file
    • Ability to select which folders to include in and exclude from the backup
    • Ability to download & manage archives of the Mambo file system
    • Ability to generate, download & manage archives of the Mambo MySQL database
    • Archive files are unique to the Mambo installation, based on creation time and a secret key
    • Ability to email the generated archive file to a specified account (BETA)
    • Backup excludes existing backup sets to conserve space
    • Backward compatible with 4.5.1

    Note: this component only backs up your DB and files when you trigger it. It is a lot BETTER to use a periodic backup strategy.

    2. Your provider has given you a Plesk panel - EASY

    We will use crontab:

    The crontab command, found in Unix and Unix-like operating systems, is used to schedule commands to be executed periodically. It reads a series of commands from standard input and collects them into a file, also known as a "crontab", which is later read and whose instructions are carried out.

    Go to the crontab section of Plesk. (I used Plesk 7.5.2 here.)

    Values that are allowed to be used:

    • Minute - 0-59 or *
    • Hour - 0-23 or *
    • Day of the Month - 1-31 or *
    • Month - 1-12 or *
    • Day of the Week - 0-6 (0 is Sunday) or *

    We need at least 3 tasks.

    1. A task which extracts all data from the database (here starting at 00:00, midnight)

    Minute: 0   Hour: 0   Day of the Month: *   Month: *   Day of the Week: *
    Command: /usr/local/mysql/bin/mysqldump -uXXXX -pYYYY -q -hHOST TABLE > /home/httpd/vhosts/waltercedric.com/private/dbbackups/mambobackup.txt

    -uXXXX: replace XXXX with your Mambo username for MySQL
    -pYYYY: replace YYYY with your Mambo password for MySQL
    -hHOST: HOST is yourdomain.com or .net or whatever TLD your domain has
    TABLE is the Mambo database you want to back up
    /home/httpd/vhosts/waltercedric.com/private/dbbackups/mambobackup.txt is a file in a directory outside /httpdocs, chmod to 700 (rwx------)

    2. A task which creates a gzip of the resulting file (here starting at 00:05)

    Minute: 5   Hour: 0   Day of the Month: *   Month: *   Day of the Week: *
    Command: gzip -9 -f /home/httpd/vhosts/waltercedric.com/private/dbbackups/mambobackup.txt

    Notes

    1. You can also combine tasks 1 and 2 in one line, without the temporary file (mambobackup.txt), by defining the task:
      /usr/local/mysql/bin/mysqldump -uXXXX -pYYYY -q -hHOST TABLE | gzip -9 > /home/httpd/vhosts/waltercedric.com/private/dbbackups/mambobackup.txt.gz
    2. The file mambobackup.txt is overwritten at each execution of the task -> maybe not a good idea :-( (see the date-stamped variant sketched below)
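
    A sketch of a variant that avoids the overwriting, using a date-stamped file name (the same date +'%m-%d-%Y' trick as in the Gmail backup entry later in this article):

    /usr/local/mysql/bin/mysqldump -uXXXX -pYYYY -q -hHOST TABLE | gzip -9 > /home/httpd/vhosts/waltercedric.com/private/dbbackups/mambobackup_`date +'%m-%d-%Y'`.txt.gz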

    3. A task which sends an email with the result (here starting at 00:10)

    Minute: 10   Hour: 0   Day of the Month: *   Month: *   Day of the Week: *
    Command: MAILTO=you@yourdomain.tld /home/httpd/vhosts/waltercedric.com/private/dbbackups/mambobackup.txt.gz

    you@yourdomain.tld is a placeholder for the mail recipient; use your own address
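
    Note that a .gz file is not itself executable by cron; a sketch of a command that actually mails the archive as an attachment, assuming the mpack tool (introduced in a later entry of this article) is installed on the host:

    Command: mpack -s "Mambo DB backup" -c application/gzip /home/httpd/vhosts/waltercedric.com/private/dbbackups/mambobackup.txt.gz you@yourdomain.tld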

     

    3. Your provider only gives you Telnet or SSH access to the server - ADVANCED USER

    Follow the guide by TVB on the Mambers forums; below is

    a copy of this text... All credit to the author.
    Cronjobbing your Mambo (or any MySQL DB, for that matter) - a short tutorial
    This is a script that was put together by another Futurequest site owner, which I have changed very slightly for my own purposes.

    If you are comfortable backing up via the command line, then hopefully this will help you. I suspect that if you are backing up this way, you already know this stuff.

    If you don't know how to back up via the command line or a cron job, it's really easy to follow the instructions (I hope they are easy - I tried to make them so).

    Please note this is for a *nix setup, and your setup or file structure on your server may be different. I am unable to offer support for it, as I only know it works for me; hopefully it will work for you as well. Use at your own risk and enjoyment.

    I'm posting it because I keep seeing a lot of questions about backing up.

    *********************
    Step 1. Using your file manager or your FTP client, create a directory called dbbackups above the www level (for example: /big/dom/xyourdomain/dbbackups). chmod it to 700.

    Step 2. Using Notepad or your favorite text editor, create a text file with the following contents:

    Code:

    #!/bin/bash
    /usr/local/mysql/bin/mysqldump -uxyourdomain -pyourpassword -q -hmysql.yourdomain.tld xyourdomain > /big/dom/xyourdomain/dbbackups/mambobackup.txt



    after the -u is your username for MySQL
    after the -p is your password for MySQL
    after the -hmysql. is yourdomain.com or .net or whatever TLD your domain has

    Following that is the name of the database you are using to run Mambo. If you need help remembering which database you used to run Mambo, check the configuration.php file of your Mambo install or check with phpMyAdmin.
    After the /big/dom/x, insert your actual domain name (no TLD this time).

    Name this text file mambobackup.sh, install it at the root (/big/dom/xyourdomain/mambobackup.sh) and chmod it to 700.

    Step 3. SSH into your account with your favorite SSH client (I like PuTTY) using your FTP username and password. On the command line, enter /big/dom/xyourdomain/mambobackup.sh and hit enter. Put your actual domain in place of yourdomain in the example.

    Step 4. From your file manager or your FTP client, check the contents of the folder /big/dom/xyourdomain/dbbackups/; inside it you should find a file named mambobackup.txt, which is a copy of the MySQL database used to run your Mambo.

    I also use vBulletin and have the exact same script installed, substituting "forum" for "mambo" wherever it occurs. Both back up to the dbbackups directory.


    Automating your backup:

    Important: if you already have a crontab job running, these instructions will overwrite it. Chances are, if you have one running and want to automate backups, you will want to add this to your existing crontab job and then reinstall it.

    Step 1. Using your favorite text editor, create a text file with the following contents:

    MAILTO=you@yourdomain.tld
    5 10 * * * /big/dom/xyourdomain/mambobackup.sh


    Make sure that after .sh you hit enter at least once to create a line feed. Put your actual domain name and TLD in place of yourdomain.tld, and your actual domain in place of xyourdomain. Name this file cronjob.txt and install it above the www (/big/dom/xyourdomain/cronjob.txt). chmod it to 700.

    Using your favorite SSH client, log into your account using your FTP username and password. On the command line enter

    crontab /big/dom/xyourdomain/cronjob.txt

    and hit enter. At 10:05 am server time, each day, a new file will overwrite your old one, and you will have a fresh daily backup made automatically for you at /big/dom/xyourdomain/dbbackups/mambobackup.txt. You can of course adjust the timing of the backup by changing the first two entries of cronjob.txt. In the example, the 5 is minutes and the 10 is hours. If you wanted the backup to be done at 11:00 PM each day, you would put 0 23 as the first two numbers. However, it will work fine as shown.

    One consideration in making backups is server storage space or disk usage.

    Automating your backup and saving disk space by gzipping your backup file

    With 1 more file and a change in the cronjob.txt file, it's easy to create an automated backup file in a zipped format (which cuts the space used to about 25% of the original). This is what you will need to add:

    1. Create a text file called zip.sh

    Inside this file, put in the following code, replacing xyourdomain with your actual information.

    Code:

    gzip -9 -f /big/dom/xyourdomain/dbbackups/mambobackup.txt



    (I have a second line of code here doing the same for our vBulletin forums, substituting forumbackup.txt for mambobackup.txt.)

    Install this file above the www (/big/dom/xyourdomain/zip.sh) and chmod it to 700. This script will turn mambobackup.txt into mambobackup.txt.gz.

    2. You will now need to modify the cronjob.txt file as follows, replacing yourdomain.tld and xyourdomain with your actual information:

    MAILTO=you@yourdomain.tld
    5 10 * * * /big/dom/xyourdomain/mambobackup.sh
    10 10 * * * /big/dom/xyourdomain/zip.sh

    After the last .sh, be sure to hit enter at least once to insert a line feed. Install the new file in place of the old one above the www (/big/dom/xyourdomain/cronjob.txt).

    SSH into your account from CNC using your FTP username and password, and on the command line enter

    crontab /big/dom/xyourdomain/cronjob.txt

    and hit enter. Replace yourdomain with your actual domain information.
    What will happen is that at 10:05 AM each day, a file called mambobackup.txt will be created in the dbbackups folder. At 10:10 AM each day, that file will be zipped up into mambobackup.txt.gz, and gzip automatically deletes mambobackup.txt. This series of events will happen automatically each day until you cancel the crontab job.

    You may change the timing of this by changing the first two numbers on the 2nd and 3rd lines of the cronjob.txt file to suit yourself. E.g., to run this at 1:05 PM and finish it at 1:10 PM, the cronjob.txt file would be

    MAILTO=you@yourdomain.tld
    5 13 * * * /big/dom/xyourdomain/mambobackup.sh
    10 13 * * * /big/dom/xyourdomain/zip.sh

    Depending on your site's busy times, it is best to schedule a backup during slower times of usage. While the backup is being made by the mambobackup.sh script, the site or forum is locked, and on a larger site this could last 20-30 seconds while mysqldump dumps the data. You won't lose any data or posts, but your site may appear sluggish during this time.

    Betsy

     

    4. You want more! - VERY ADVANCED USER

    You can use the open source project AutoMySQLBackup:

    A script to take daily, weekly and monthly backups of your MySQL databases using mysqldump. Features: backup multiple databases - single backup file or a separate file for each DB - compress backup files - backup remote servers - e-mail logs.

    • Backup multiple MySQL databases with one script. (Now able to backup ALL databases on a server easily; no longer any need to specify each database separately.)
    • Backup all databases to a single backup file or to a separate directory and file for each database.
    • Automatically compress the backup files to save disk space, using either gzip or bzip2 compression.
    • Can backup remote MySQL servers to a central server.
    • Runs automatically using cron, or can be run manually.
    • Can e-mail the backup log to any specified e-mail address instead of "root". (Great for hosted websites and databases.)
    • Can e-mail the compressed database backup files to the specified e-mail address.
    • Can specify a maximum backup size to e-mail.
    • Can be set to run PRE and POST backup commands.
    • Choose which day of the week to run weekly backups.
  • One of the most important rules is to always back up your data with multiple tools, on different media, just in case.

    Just imagine for 5 minutes that you lose everything you have done in the last 3 months on your homepage... OK, still not crying? You lose hundreds of hours of work, nightly debugging, customizations, hours of Gimp/Photoshop, lengthy and well-written articles...

    With Joomla!, and now with nearly all CMSes, the most important thing to back up is the database. I recommend at least a daily backup!

    And now my solution: AutoMySQLBackup

    A script to take daily, weekly and monthly backups of your MySQL databases using mysqldump.

    Features

    • Backup multiple MySQL databases with one script. (Now able to backup ALL databases on a server easily; no longer any need to specify each database separately.)
    • Backup all databases to a single backup file or to a separate directory and file for each database.
    • Automatically compress the backup files to save disk space, using either gzip or bzip2 compression.
    • Can backup remote MySQL servers to a central server.
    • Runs automatically using cron, or can be run manually.
    • Can e-mail the backup log to any specified e-mail address instead of "root". (Great for hosted websites and databases.)
    • Can e-mail the compressed database backup files to the specified e-mail address.
    • Can specify a maximum backup size to e-mail.
    • Can be set to run PRE and POST backup commands.
    • Choose which day of the week to run weekly backups.

    Just download the file, save it somewhere, and configure automysqlbackup with your editor of choice (the first 25 lines of the file). To test that everything works, just run the script.
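
    A minimal sketch of that configuration section (variable names as found in the 2.x script I used; check the comments in your own copy):

    # Username to access the MySQL server
    USERNAME=backup_user
    # Password to access the MySQL server
    PASSWORD=yourpassword
    # Host name (or IP address) of the MySQL server
    DBHOST=localhost
    # List of databases for daily/weekly backup, e.g. "DB1 DB2 DB3"
    DBNAMES="mambo forum"
    # Backup directory location, e.g. /backups
    BACKUPDIR="/home/backups"
    # Email address to send the log (and optionally the files) to
    MAILADDR="you@yourdomain.tld"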

    Note:

    automysqlbackup requires Mutt:

    # yast2 -i mutt
    Mutt (http://www.mutt.org) is a small but very powerful text-based mail client for Unix operating systems. It is used by the script automysqlbackup (sourceforge.net/projects/automysqlbackup/) to send mails with attachments (automysqlbackup does not use the plain mail command in that case).

    automysqlbackup sends a daily email with the database content, so the data does not live only on the host (in case of a crash) or on FTP. You may create 2 email accounts and forward the result of the backup to 2 different free email hosters (Gmail and Hotmail, for example).

  •  apache_maven

    Last year I was at Jazoon 08, and I forgot to tell you how good some of the presentations about Maven were.

    Let the Continuous Build Embrace Your Database

    "JUnit tests should not depend on database state." - "Set up your test data before you run your test." - We figure this just does not always scale. Mocking test data for hundreds of tables may not be suitable and database schemes evolve as the application does.
    These are common challenges when developing large J2EE systems. This presentation shows practical approaches for project setups and (functional) tests that decided to depend on a database. Developers and build servers may or may not share the database and support for this should be as simple as possible. We give an overview of what proved to be a good setup for an Eclipse / Maven 2 based development and build environment that relies on an evolving relational schema.

    Read More Here 

    The PDF can no longer be downloaded; fortunately, I made a backup just in case 2 years ago. I uploaded the presentation to SlideShare.

    Here is the mind map I made during the presentation:

    • Continuous build for DB
      • DB changes
        • SQL script patches
        • changes in schema
        • different DB state for each trunk/tag/branch?
        • = a hell of synchronization issues
        • they put the scripts in SVN
        • only run the scripts modified since the last build
        • run SQL scripts against a reference DB before pushing the same changes to prod
        • ex: a developer commits, the build server polls SVN and launches a build, then propagates
        • they use Continuum
      • they have made a framework with some extra tables to keep track of which .sql files have run
      • and at which SVN revision each .sql file was
      • so they can run only the delta scripts
      • ex: version 1.0 is in prod, but a bug appears
        • -> open a branch
        • -> branch SQL scripts are automatically run against trunk as well
      • Idempotent
        • but the same script applied twice on different database states does not give the same result
          • so they have to make scripts idempotent by checking/handling all previous versions
        • views and triggers can be made idempotent easily
      • they have DB quality checks
        • primary key constraints
        • foreign keys
        • etc.
    • fighting bugs
      • not breaking SQL scripts
      • no regressions
    • rerunnable JUnit functional tests
      • auto-rollback JUnit class
        • their own implementation of DataSource
          • and Connection
      • don't expect developers to properly call rollback in teardown
      • extend AutoRollbackJUnitTestCase
      • an auto-rollback test case also exists in Spring, see spring-test.jar
    • Eclipse Maven setup
      • for JUnit tests
      • reads junit.properties
      • if any junit-fritz.properties exists, it will use that user config file
        • good idea
        • the file will be committed but won't break the Continuum build server
      • multi modules
        • different classpaths (test and main) between Eclipse and Maven
        • they use properties in pom.xml and variables in properties
          • -> filter
    • done by teslekurs
      • they have 70 modules
      • netcetera.ch
    • make a try
      • go to the workspace in a DOS shell
        • run "mvn clean test" in pk common; it should build common like in TeamCity
      • use the Spring test framework of Spring 2.5
    • outlook
      • only Oracle
      • they search for good test data among their 1 TB of data
      • they want to use Maven also in Eclipse; they use the command line right now
    • ideas
      • they store in the DB the scripts they have run to create the database, together with their SVN revision
        Someone in the room proposed keeping the data in the build and adding a column to record whether data was created by JUnit or by the main code.

     

    Database with junit

  • Here is the easiest way to run a daily backup of your database using the Linux crontab. Thanks to our ever-increasing mailbox sizes (thanks, Gmail) and their nature of being quite safe for storing documents, let's use them to store a full backup of all our MySQL databases!

    Requirements

    • Having shell access to your Linux box
    • Creating a user with limited MySQL rights: SELECT and LOCK TABLES are enough. For example, a user backup_user with the password ChhdeqyqUzd75687fOnmYar (see the sketch after this list)
    • Installing mpack: tools for encoding/decoding MIME messages. Mpack and munpack are utilities for encoding and decoding (respectively) binary files in MIME (Multipurpose Internet Mail Extensions) format mail messages. For compatibility with older forms of transferring binary files, the munpack program can also decode messages in split-uuencoded format.
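
    A sketch of creating such a limited user from the shell (MySQL 5.x GRANT syntax; adjust the host and database name to yours):

    mysql -uroot -p -e "GRANT SELECT, LOCK TABLES ON yourdb.* TO 'backup_user'@'localhost' IDENTIFIED BY 'ChhdeqyqUzd75687fOnmYar';"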

    Edit your crontab

    crontab -e

    and put the following inside, on one line:

    0 1 * * * /usr/bin/mysqldump -ubackup_user -pChhdeqyqUzd75687fOnmYar yourdb | gzip > /database_`date +'%m-%d-%Y'`.sql.gz ; mpack -s "Databases backup" -c application/gzip /database_`date +'%m-%d-%Y'`.sql.gz you@yourdomain.tld

    You can replace the word yourdb with your database name, or use --all-databases to dump all databases. With the above line, a backup will run at 1 AM every day and be sent to your mailbox.
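
    To restore one of these archives later, a sketch (the date-stamped file name is an example; note that restoring needs a user with write privileges, not the read-only backup_user):

    gunzip < /database_01-31-2010.sql.gz | /usr/bin/mysql -uroot -p yourdb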

  • According to this MySQL page here, you can gain 30 to 50% more performance using MySQL JDBC named pipes!

    Named pipes only work when connecting to a MySQL server on the same physical machine as the one the JDBC driver is being used on.
    In simple performance tests, it appears that named pipe access is between 30%-50% faster than the standard TCP/IP access.


    By default, when you install MySQL on Windows using the installer, TCP/IP is the default option. The question remains why nobody seems
    to use named pipes, or has written any article about it on the internet. Lorenz and I spent nearly 2 hours making this damn thing work.
    We googled, found nothing, and tried and tried until we finally succeeded. That is the reason why I am writing this article now.

    We successfully tested MySQL 5.0 using named pipes, and what an increase in speed! 50% in the case of this big project.
    First, it may be a good idea to use the latest Connector/J driver from MySQL:

    MySQL Connector/J is a native Java driver that converts JDBC (Java Database Connectivity) calls into the network protocol used by
    the MySQL database. It lets developers working with the Java programming language easily build programs and applets that interact
    with MySQL and connect all corporate data, even in a heterogeneous environment. MySQL Connector/J is a Type IV JDBC driver and
    has a complete JDBC feature set that supports the capabilities of MySQL.

    Download Binaries & Source

    Just put the file mysql-connector-java-5.0.7-bin.jar in your classpath. You'll have to slightly alter the my.ini file as follows:

    [mysqld]
    skip-networking
    enable-named-pipe
    socket=mysql.sock


    Add these 3 keys to the [mysqld] section of my.ini and restart MySQL. The first key switches the TCP/IP port off, so you won't be able
    to connect to port 3306 anymore. The 2 remaining keys just activate MySQL named pipes.

    Verify that everything works correctly by firing up MySQL Query Browser and connecting to your database
    (see the connection details below).
    [Screenshot: MySQL Query Browser connecting over a named pipe]

    For the most intrepid of you, or if you want to add named-pipe capabilities to previously saved connections, you can also use the menu "Tools" - "Manage Connections" and, under the "Advanced Parameters" tab, add these 2 new keys:

    NAMED_PIPE    value  Yes
    SOCKET_PATH   value   mysql.sock

    We are nearly finished; all we have to do now is alter the JDBC URL, and this is where we fought against Windows(tm) until we found the path to the mysql.sock file handle.
    For the impatient among you, here is the solution:

    jdbc:mysql:///database
    ?socketFactory=com.mysql.jdbc.NamedPipeSocketFactory
    &namedPipePath=\\\\.\\Pipe\\mysql.sock


    database being the database/schema name

    socketFactory=com.mysql.jdbc.NamedPipeSocketFactory: Connector/J also supports access to MySQL via named pipes on Windows NT/2000/XP, using
    the NamedPipeSocketFactory as a plugin socket factory via the socketFactory property. If you don't use a namedPipePath property, the default
    of '\\.\pipe\MySQL' will be used. If you use the NamedPipeSocketFactory, the hostname and port number values in the JDBC URL will be ignored.
    You can enable this feature using: socketFactory=com.mysql.jdbc.NamedPipeSocketFactory

    namedPipePath=\\\\.\\Pipe\\mysql.sock is the path to the socket file. Notice how strange the path looks under Windows(tm) (escaping \ in Java is normal).
    Under Linux we would have written /var/log/mysql.sock and forgotten about it. Under Windows(tm) you really have no chance of finding it until you use
    FileMon (a SysInternals tool).

    FileMon monitors and displays file system activity on a system in real time. Its advanced capabilities make it a powerful tool for exploring the way
    Windows works, seeing how applications use files and DLLs, or tracking down problems in system or application file configurations. FileMon's
    timestamping feature will show you precisely when every open, read, write or delete happens, and its status column tells you the outcome. FileMon
    is so easy to use that you'll be an expert within minutes. It begins monitoring when you start it, and its output window can be saved to a file for off-line
    viewing. It has full search capability, and if you find that you're getting information overload, simply set up one or more filters.
    Download it HERE

    You can use FileMon to filter files by name: search for mysql* and you'll see that strange path. Note that the documentation gives some advice about this path
    (more or less): the default of '\\.\pipe\MySQL' would have worked if we had named the pipe MySQL and not mysql.sock.

    But wait, there is more to learn: MySQL supports a wide range of parameters when you open the connection. Just read this page; a lot of settings may speed up your application even more, as sketched below.
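
    For example (a sketch; cachePrepStmts and prepStmtCacheSize are documented Connector/J properties, but verify them against your driver version), the named-pipe URL above could be extended like this:

    jdbc:mysql:///database
    ?socketFactory=com.mysql.jdbc.NamedPipeSocketFactory
    &namedPipePath=\\\\.\\Pipe\\mysql.sock
    &cachePrepStmts=true
    &prepStmtCacheSize=250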
  • I publish here some of my server settings in the hope that they will also help others...

    Server Setup

    AMD64, 1 GB RAM, SUSE Linux OSS 10.0, 7 Joomla instances (one being waltercedric.com with 250'000 unique visitors per month), 1 SimpleMachines forum, 3 Gallery2 installs

    All MySQL tables are MyISAM (table locking instead of row locking as in InnoDB; MyISAM makes sense here as there are more reads than inserts).

    # vi /etc/my.cnf

    Below is the diff between my my.cnf and the one from a standard MySQL install:

    # diff my.cnf my-beforeOptimizations.cnf
    30c30
    < key_buffer = 50M
    ---
    > key_buffer = 16M
    32,33c32,33
    < table_cache = 1500
    < sort_buffer_size = 4M
    ---
    > table_cache = 64
    > sort_buffer_size = 512K
    35,46c35,37
    < read_buffer_size = 4M
    < read_rnd_buffer_size = 1024K
    < myisam_sort_buffer_size = 64M
    < join_buffer_size = 4M
    < thread_cache_size = 128
    < wait_timeout = 14400
    < connect_timeout = 10
    < max_connect_errors = 10
    < query_cache_limit = 2M
    < query_cache_size = 128M
    < query_cache_type = 1
    < thread_concurrency=4
    ---
    > read_buffer_size = 256K
    > read_rnd_buffer_size = 512K
    > myisam_sort_buffer_size = 8M
    163,166c154,157
    < key_buffer = 64M
    < sort_buffer = 64M
    < read_buffer = 16M
    < write_buffer = 16M
    ---
    > key_buffer = 20M
    > sort_buffer_size = 20M
    > read_buffer = 2M
    > write_buffer = 2M

  • Digging into MySQL settings can be time-consuming, like with any other component. Most of the time, just by changing a few settings, you can expect a performance increase. The problem is what to change, and to which value. This is where tuning-primer.sh helps you:

    Get the script from http://forge.mysql.com/projects/view.php?id=44, upload it, unzip it, and install it in your /etc folder. Then run it from the command line by entering ./path-to-file/tuning-primer.sh
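
    Concretely, assuming you placed it in /etc as suggested above (a sketch):

    chmod +x /etc/tuning-primer.sh
    /etc/tuning-primer.sh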

    MySQL Server must have been running for a few days or weeks, or it won't be safe to follow these recommendations.

    To find out more about how each of these runtime variables affects performance, visit:
    http://dev.mysql.com/doc/refman/5.0/en/server-system-variables.html

    Here is an example of tuning-primer.sh output:

    SLOW QUERIES
    Current long_query_time = 5 sec.
    You have 2856 out of 4725688 that take longer than 5 sec. to complete
    The slow query log is enabled.
    Your long_query_time seems to be fine

    WORKER THREADS

    Current thread_cache_size = 128
    Current threads_cached = 55
    Current threads_per_sec = 0
    Historic threads_per_sec = 0
    Your thread_cache_size is fine

    MAX CONNECTIONS

    Current max_connections = 100
    Current threads_connected = 15
    Historic max_used_connections = 55
    The number of used connections is 55% of the configured maximum.
    Your max_connections variable seems to be fine.

    MEMORY USAGE
    Max Memory Ever Allocated : 305 M
    Configured Max Per-thread Buffers : 1017 M
    Configured Max Global Buffers : 143 M
    Configured Max Memory Limit : 1 G
    Total System Memory : 2.99 G
    Max memory limit seem to be within acceptable norms

    KEY BUFFER
    Current MyISAM index space = 4 M
    Current key_buffer_size = 5 M
    Key cache miss rate is 1 : 3740
    Key buffer fill ratio = 35.00 %
    Your key_buffer_size seems to be too high.
    Perhaps you can use these resources elsewhere


    QUERY CACHE
    Query cache is enabled
    Current query_cache_size = 128 M
    Current query_cache_used = 27 M
    Current query_cach_limit = 2 M
    Current Query cache fill ratio = 21.13 %
    Your query_cache_size seems to be too high.
    Perhaps you can use these resources elsewhere
    MySQL won't cache query results that are larger than query_cache_limit in size


    SORT OPERATIONS
    Current sort_buffer_size = 4 M
    Current record/read_rnd_buffer_size = 1020 K
    Sort buffer seems to be fine

    JOINS
    Current join_buffer_size = 1.00 M
    You have had 7065 queries where a join could not use an index properly
    You should enable "log-queries-not-using-indexes"
    Then look for non indexed joins in the slow query log.
    If you are unable to optimize your queries you may want to increase your
    join_buffer_size to accommodate larger joins in one pass.


    Note! This script will still suggest raising the join_buffer_size when
    ANY joins not using indexes are found.

     

  • ORA-00054: resource busy and acquire with NOWAIT specified
    Cause: Resource interested is busy.
    Action: Retry if necessary.

    I show you here how to get around this error and kill the sessions that are preventing the exclusive lock.

    Note that you could also (preferred):

    • Run your changes later, when the database is idle.
    • Do all DDL during a maintenance window, when no users are logged in.

    Run in SQL*Plus or SQL Developer

    select a.sid, a.serial# from v$session a, v$locked_object b, dba_objects c
    where b.object_id = c.object_id
    and a.sid = b.session_id
    and c.object_name = 'YourTableName';

    Then note both the sid and serial# and run

    alter system kill session 'sid,serial#' immediate;
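
    For example, if the query above returned sid 123 and serial# 4567 (hypothetical values):

    alter system kill session '123,4567' immediate;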

    If this still does not work, your last bullet is to kill the Unix process that is still hanging on your database server!

    Here is how to find the Unix process to kill.

    Run in SQL*Plus or SQL Developer

    select p.spid, s.sid, s.serial#, s.username, s.status, s.last_call_et,
           p.program, p.terminal, logon_time, module, s.osuser
    from v$process p, v$session s
    where s.paddr = p.addr and s.status = 'ACTIVE'
    and s.username not like '%SYS%';

    and finally run a

    kill -9 spid

    If this still does not work, your best friend is Google.

  • The Oracle error ORA-00054 can also be solved by executing, as DBA, the following query:

    select p.spid, s.sid, s.serial#, s.username, s.status, s.last_call_et,
           p.program, p.terminal, logon_time, module, s.osuser
    from
           v$process p, v$session s
    where
           s.paddr = p.addr and s.status = 'ACTIVE' and
           s.username not like '%SYS%';

    This will return the spid, the OS process ID of the hanging process.

    You can then SSH to the server running Oracle and kill the process:

    kill -9 PID
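
    Both steps can also be combined in a small shell sketch (assumes sqlplus is on the PATH and you can connect as SYSDBA; view names as used above):

    #!/bin/bash
    # List active non-SYS sessions together with their OS process id (spid)
    sqlplus -s / as sysdba <<'EOF'
    set linesize 200
    select p.spid, s.sid, s.serial#, s.username, s.status
    from v$process p, v$session s
    where s.paddr = p.addr
    and s.status = 'ACTIVE'
    and s.username not like '%SYS%';
    EOF
    # Then, for a chosen spid (hypothetical value):
    # kill -9 12345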