Sunday, April 01, 2012

How to Re-establish Broken MySQL Replication Between Master and Slave in a MySQL Multi-Master Configuration

1. cd ~/mysql
2. Create a config file (my_backup.cnf) for the backup. Sample config files are given below:

a. Your original config file contents (my.cnf):

datadir = ~/mysql/data
innodb_data_home_dir = ~/mysql/data
innodb_data_file_path = ibdata1:10M:autoextend
innodb_log_group_home_dir = ~/mysql/data
set-variable = innodb_log_files_in_group=2
set-variable = innodb_log_file_size=20M

b. New backup config file contents (my_backup.cnf):

datadir = ~/mysql/backup
innodb_data_home_dir = ~/mysql/backup
innodb_data_file_path = ibdata1:10M:autoextend
innodb_log_group_home_dir = ~/mysql/backup
set-variable = innodb_log_files_in_group=2
set-variable = innodb_log_file_size=20M

3. Take the backup using the following command:

mysql]# bin/ibbackup conf/my.cnf conf/my_backup.cnf

4. The backup folder will look something like this:

$ ls -lh ~/mysql/backup
total 38M
-rw-r-----    1 sqladmin    sqladmin           12M Jan 21 18:40 ibbackup_logfile
-rw-r-----    1 sqladmin    sqladmin           14M Jan 21 18:35 ibdata1.ibz
-rw-r-----    1 sqladmin    sqladmin          8.8M Jan 21 18:37 ibdata2.ibz
-rw-r-----    1 sqladmin    sqladmin          2.2M Jan 21 18:40 ibdata3.ibz

5. Now apply the log to bring the backup up to date with any changes that happened while the backup was being taken.

mysql]# bin/ibbackup --apply-log conf/my_backup.cnf

InnoDB Hot Backup version 3.0.0; Copyright 2002-2005 Innobase Oy
...
ibbackup: Last MySQL binlog file position 0 11751329, file name ./mysql-bin.000030
ibbackup: The first data file is '~/mysql/backup/ibdata1'
ibbackup: and the new created log files are at '~/mysql/backup/'
081107 15:42:17  ibbackup: Full backup prepared for recovery successfully!

6. Note:

The following line is important; save it for future reference:

"ibbackup: Last MySQL binlog file position 0 11751329, file name ./mysql-bin.000030"

7. Now copy the backup to the slave machine (preferably by tarring all the backup files first), for example:
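(The slave host name below is a placeholder; adjust the user name and paths for your environment.)

$ tar czf mysql_backup.tar.gz -C ~/mysql backup
$ scp mysql_backup.tar.gz sqladmin@slave-host:~/mysql/
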
8. Stop the slave server and put the backup files into the mysql/data directory.
9. Make sure they are all owned by sqladmin:sqladmin (`chown sqladmin:sqladmin`).
10. Add the directive `skip-slave-start` to the conf/my.cnf file (to prevent MySQL from starting replication automatically when the server comes up).
11. Save the conf/my.cnf file.
12. Start mysql.
13. Connect to the mysql prompt:

mysql]# bin/mysql --defaults-file=conf/my.cnf

14. Make sure the slave is stopped:

sqladmin@localhost [(none)]>stop slave;

15. Update the master coordinates on the slave using:

sqladmin@localhost [(none)]>CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000030', MASTER_LOG_POS=11751329;

The two values are the ones you noted down in step 6.

16. Now start replication:

sqladmin@localhost [(none)]>start slave;

17. And check replication status:

sqladmin@localhost [(none)]>show slave status\G

The output should look something like this:

sqladmin@localhost [(none)]>show slave status\G
*************************** 1. row ***************************
             Slave_IO_State: Waiting for master to send event
                Master_Host: v-cps3.persistent.co.in
                Master_User: sqladmin_repl
                Master_Port: 3306
              Connect_Retry: 60
            Master_Log_File: mysql-bin.000030
        Read_Master_Log_Pos: 12855143
             Relay_Log_File: v-cps3-relay-bin.000002
              Relay_Log_Pos: 11726383
      Relay_Master_Log_File: mysql-bin.000030
           Slave_IO_Running: Yes
          Slave_SQL_Running: Yes
            Replicate_Do_DB:
        Replicate_Ignore_DB:
         Replicate_Do_Table:
     Replicate_Ignore_Table:
    Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
                 Last_Errno: 0
                 Last_Error:
               Skip_Counter: 0
        Exec_Master_Log_Pos: 12855143
            Relay_Log_Space: 11726383
            Until_Condition: None
             Until_Log_File:
              Until_Log_Pos: 0
         Master_SSL_Allowed: No
         Master_SSL_CA_File:
         Master_SSL_CA_Path:
            Master_SSL_Cert:
          Master_SSL_Cipher:
             Master_SSL_Key:
      Seconds_Behind_Master: 0

18. Finally, remove the skip-slave-start directive from the conf/my.cnf file so that replication starts automatically the next time the server is restarted.

robots.txt: Indexing and Crawling by Search Engines

Although the robots.txt file is a very important file if you want to have a good ranking on search engines, many Web sites don't offer this file.
If your Web site doesn't have a robots.txt file yet, read on to learn how to create one. If you already have a robots.txt file, read our tips to make sure that it doesn't contain errors.

What is robots.txt?
When a search engine crawler comes to your site, it will look for a special file on your site. That file is called robots.txt and it tells the search engine spider which Web pages of your site should be indexed and which Web pages should be ignored.
The robots.txt file is a simple text file (no HTML) that must be placed in your root directory, for example:
    http://www.yourwebsite.com/robots.txt

How do I create a robots.txt file?
As mentioned above, the robots.txt file is a simple text file. Open a simple text editor to create it. The content of a robots.txt file consists of so-called "records".
A record contains the information for one specific search engine. Each record consists of two fields: the User-agent line and one or more Disallow lines. Here's an example:
    User-agent: googlebot
    Disallow: /cgi-bin/
This robots.txt file would allow the "googlebot", which is the search engine spider of Google, to retrieve every page from your site except for files from the "cgi-bin" directory. All files in the "cgi-bin" directory will be ignored by googlebot.
The Disallow command works like a prefix wildcard. If you enter
    User-agent: googlebot
    Disallow: /support
both "/support-desk/index.html" and "/support/index.html" as well as all other files in the "support" directory would not be indexed by search engines.
If you leave the Disallow line blank, you're telling the search engine that all files may be indexed. In any case, you must enter a Disallow line for every User-agent record.
If you want to give all search engine spiders the same rights, use the following robots.txt content:
    User-agent: *
    Disallow: /cgi-bin/

Where can I find user agent names?
You can find user agent names in your log files by checking for requests to robots.txt. Most often, all search engine spiders should be given the same rights. In that case, use "User-agent: *" as mentioned above.

Things you should avoid
If you don't format your robots.txt file properly, some or all files of your Web site might not get indexed by search engines. To avoid this, do the following:
  1. Don't use comments in the robots.txt file

    Although comments are allowed in a robots.txt file, they might confuse some search engine spiders.

    "Disallow: support # Don't index the support directory" might be misinterepreted as "Disallow: support#Don't index the support directory".


  2. Don't use white space at the beginning of a line. For example, don't write

        User-agent: *
        Disallow: /support

    but

    User-agent: *
    Disallow: /support


  3. Don't change the order of the commands. If you want your robots.txt file to work, keep the commands in the right order. Don't write

    Disallow: /support
    User-agent: *

    but

    User-agent: *
    Disallow: /support


  4. Don't use more than one directory in a Disallow line. Do not use the following

    User-agent: *
    Disallow: /support /cgi-bin/ /images/

    Search engine spiders cannot understand that format. The correct syntax for this is

    User-agent: *
    Disallow: /support
    Disallow: /cgi-bin/
    Disallow: /images/


  5. Be sure to use the right case. The file names on your server are case sensitive. If the name of your directory is "Support", don't write "support" in the robots.txt file.


  6. Don't list all files. If you want a search engine spider to ignore all files in a specific directory, you don't have to list every file. For example:

    User-agent: *
    Disallow: /support/orders.html
    Disallow: /support/technical.html
    Disallow: /support/helpdesk.html
    Disallow: /support/index.html

    You can replace this with

    User-agent: *
    Disallow: /support


  7. There is no "Allow" command

    Don't use an "Allow" command in your robots.txt file. Only mention files and directories that you don't want to be indexed. All other files will be indexed automatically if they are linked on your site.

Tips and tricks:
1. How to allow all search engine spiders to index all files
    Use the following content for your robots.txt file if you want to allow all search engine spiders to index all files of your Web site:
    User-agent: *
    Disallow:
2. How to disallow all spiders to index any file
    If you don't want search engines to index any file of your Web site, use the following:
    User-agent: *
    Disallow: /
3. Where to find more complex examples
    If you want to see more complex examples of robots.txt files, view the robots.txt files of big Web sites.
Your Web site should have a proper robots.txt file if you want to have good rankings on search engines. Only if search engines know what to do with your pages can they give you a good ranking.

JVM Garbage Collector Approaches

Two basic approaches to distinguishing live objects from garbage are reference counting and tracing. Reference counting garbage collectors distinguish live objects from garbage objects by keeping a count for each object on the heap. The count keeps track of the number of references to that object. Tracing garbage collectors actually trace out the graph of references starting with the root nodes. Objects that are encountered during the trace are marked in some way. After the trace is complete, unmarked objects are known to be unreachable and can be garbage collected.

1. Reference Counting Collectors:

Reference counting was an early garbage collection strategy. In this approach, a reference count is maintained for each object on the heap. When an object is first created and a reference to it is assigned to a variable, the object's reference count is set to one. When any other variable is assigned a reference to that object, the object's count is incremented. When a reference to an object goes out of scope or is assigned a new value, the object's count is decremented. Any object with a reference count of zero can be garbage collected. When an object is garbage collected, any objects that it refers to have their reference counts decremented. In this way the garbage collection of one object may lead to the subsequent garbage collection of other objects.

An advantage of this approach is that a reference counting collector can run in small chunks of time closely interwoven with the execution of the program. This characteristic makes it particularly suitable for real-time environments where the program can't be interrupted for very long. A disadvantage is that reference counting does not detect cycles: two or more objects that refer to one another. An example of a cycle is a parent object that has a reference to a child object that has a reference back to the parent. These objects will never have a reference count of zero even though they may be unreachable by the roots of the executing program. Another disadvantage of reference counting is the overhead of incrementing and decrementing the reference count each time a reference is assigned or goes out of scope.
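
As a minimal Java sketch of such a cycle (the Parent, Child, and CycleExample names are invented purely for illustration):

    class Parent {
        Child child;
    }

    class Child {
        Parent parent;
    }

    public class CycleExample {
        public static void main(String[] args) {
            Parent p = new Parent();
            Child c = new Child();
            p.child = c;    // parent references child
            c.parent = p;   // child references parent: a reference cycle
            p = null;       // drop the only root references to the pair
            c = null;
            // Both objects still reference each other, so a pure reference
            // counting collector never sees their counts reach zero.
            // A tracing collector starts from the roots, finds neither
            // object reachable, and can reclaim both.
        }
    }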

Because of the disadvantages inherent in the reference counting approach, this technique is currently out of favor. It is more likely that the Java virtual machines you encounter in the real world will use a tracing algorithm in their garbage-collected heaps.

2. Tracing Collectors:

Tracing garbage collectors trace out the graph of object references starting with the root nodes. Objects that are encountered during the trace are marked in some way. Marking is generally done by either setting flags in the objects themselves or by setting flags in a separate bitmap. After the trace is complete, unmarked objects are known to be unreachable and can be garbage collected.

The basic tracing algorithm is called "mark and sweep." This name refers to the two phases of the garbage collection process. In the mark phase, the garbage collector traverses the tree of references and marks each object it encounters. In the sweep phase, unmarked objects are freed, and the resulting memory is made available to the executing program. In the Java virtual machine, the sweep phase must include finalization of objects.
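
The following rough Java sketch illustrates the two phases; the HeapObject class, the heap list, and the roots list are assumptions made for this example and are not part of any real JVM:

    import java.util.ArrayList;
    import java.util.List;

    // Illustrative model only; not how a real JVM represents its heap.
    class HeapObject {
        boolean marked;
        List<HeapObject> references = new ArrayList<>();
    }

    class MarkAndSweepCollector {
        List<HeapObject> heap = new ArrayList<>();   // every allocated object
        List<HeapObject> roots = new ArrayList<>();  // references held by stacks/statics

        // Mark phase: traverse the graph of references starting at the roots.
        void mark(HeapObject obj) {
            if (obj == null || obj.marked) return;
            obj.marked = true;
            for (HeapObject ref : obj.references) {
                mark(ref);
            }
        }

        // Sweep phase: anything left unmarked is unreachable and is freed.
        void collect() {
            for (HeapObject root : roots) {
                mark(root);
            }
            heap.removeIf(obj -> !obj.marked);
            for (HeapObject obj : heap) {
                obj.marked = false;   // reset marks for the next collection
            }
        }
    }

A real collector also has to handle finalization, objects created while the collection runs, and heap fragmentation, none of which this sketch attempts.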

Cohesion and Coupling: Two OO Design Principles in Java

Cohesion and Coupling deal with the quality of an OO design. Generally, good OO design calls for loose coupling and high cohesion. The goals of OO design are to make the application
  • Easy to Create
  • Easy to Maintain
  • Easy to Enhance

Coupling:

Coupling is the degree to which one class knows about another class. Consider two classes, class A and class B. If class A knows class B only through its interface, i.e. it interacts with class B only through its API, then class A and class B are said to be loosely coupled.

If, on the other hand, class A also interacts with class B through things other than class B's interface (its non-API internals), then they are said to be tightly coupled. Suppose the developer changes class B's non-interface part, i.e. the non-API stuff: with loose coupling class A does not break, but with tight coupling the change causes class A to break.

So it is always a good OO design principle to keep coupling between classes loose, i.e. all interactions between objects in an OO system should go through the APIs. An aspect of good class and API design is that classes should be well encapsulated.
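
A rough Java sketch of the difference (the class and field names are invented for illustration):

    // Tightly coupled: TightA reaches into TightB's internal state directly.
    class TightB {
        public int balanceInCents;            // internal representation exposed
    }

    class TightA {
        void printBalance(TightB b) {
            // Breaks as soon as TightB changes how it stores the balance.
            System.out.println(b.balanceInCents / 100.0);
        }
    }

    // Loosely coupled: LooseA only uses LooseB's published API.
    class LooseB {
        private int balanceInCents;
        public double getBalance() {          // the API that LooseA depends on
            return balanceInCents / 100.0;
        }
    }

    class LooseA {
        void printBalance(LooseB b) {
            // LooseB can change its internals freely; this code keeps working.
            System.out.println(b.getBalance());
        }
    }

If LooseB later stores the balance in, say, a BigDecimal, only LooseB and its getBalance() method change; LooseA keeps compiling and working, whereas TightA would break.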

Cohesion:

Cohesion indicates the degree to which a class has a single, well-focused purpose. Whereas coupling is about how classes interact with each other, cohesion focuses on how a single class is designed. The higher the cohesion of a class, the better the OO design.
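
For example (an illustrative sketch; the class names are made up):

    // Low cohesion: one class mixes several unrelated responsibilities.
    class OrderManager {
        void placeOrder()      { /* ... */ }
        void printInvoice()    { /* ... */ }
        void sendEmail()       { /* ... */ }
        void backupDatabase()  { /* ... */ }   // nothing to do with orders
    }

    // Higher cohesion: each class has a single, well-focused purpose.
    class OrderService {
        void placeOrder()      { /* ... */ }
    }

    class InvoicePrinter {
        void printInvoice()    { /* ... */ }
    }

    class EmailNotifier {
        void sendEmail()       { /* ... */ }
    }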

Benefits of Higher Cohesion:
  • Highly cohesive classes are much easier to maintain and need to be changed less frequently.
  • Such classes are more reusable than others because they are designed with a well-focused purpose.