Repair the cacti database (or any other MySQL database).

When you are not able to add a new server in cacti, or not able to view any server's graph readings, your cacti database is probably corrupted or crashed. Check the MySQL log to confirm:

[root@local ~]# tail -f /var/log/mysqld.log
120131 16:25:01 [ERROR] /usr/libexec/mysqld: Table './cacti/poller_item' is marked as crashed and should be repaired
120131 16:25:01 [ERROR] /usr/libexec/mysqld: Table './cacti/poller_item' is marked as crashed and should be repaired
120131 16:25:01 [ERROR] /usr/libexec/mysqld: Table './cacti/poller_item' is marked as crashed and should be repaired
120131 16:25:01 [ERROR] /usr/libexec/mysqld: Table './cacti/poller_item' is marked as crashed and should be repaired
120131 16:25:01 [ERROR] /usr/libexec/mysqld: Table './cacti/poller_item' is marked as crashed and should be repaired
120131 16:25:01 [ERROR] /usr/libexec/mysqld: Table './cacti/poller_item' is marked as crashed and should be repaired
120131 16:25:01 [ERROR] /usr/libexec/mysqld: Table './cacti/poller_item' is marked as crashed and should be repaired
120131 16:25:01 [ERROR] /usr/libexec/mysqld: Table './cacti/poller_item' is marked as crashed and should be repaired
120131 16:25:01 [ERROR] /usr/libexec/mysqld: Table './cacti/poller_item' is marked as crashed and should be repaired
120131 16:25:01 [ERROR] /usr/libexec/mysqld: Table './cacti/poller_item' is marked as crashed and should be repaired

The MySQL log shows that the cacti database is corrupted or crashed. Follow the steps below to repair it.
  
STEP 1:- Check which tables are corrupted.
[root@localhost ~]# mysqlcheck -c cacti -p
Enter password:
cacti.cdef                                        OK
cacti.cdef_items                                  OK
cacti.colors                                      OK
cacti.data_input                                  OK
cacti.data_input_data
warning  : 3 clients are using or haven't closed the table properly
status   : OK
cacti.data_input_fields                           OK
cacti.data_local
warning  : 3 clients are using or haven't closed the table properly
status   : OK
cacti.data_template                               OK
cacti.data_template_data
warning  : 3 clients are using or haven't closed the table properly
status   : OK
cacti.data_template_data_rra
warning  : 3 clients are using or haven't closed the table properly
status   : OK
cacti.data_template_rrd
warning  : 3 clients are using or haven't closed the table properly
status   : OK
cacti.graph_local
warning  : 3 clients are using or haven't closed the table properly
status   : OK
cacti.graph_template_input                        OK
cacti.graph_template_input_defs                   OK
cacti.graph_templates
warning  : 1 client is using or hasn't closed the table properly
status   : OK
cacti.graph_templates_gprint                      OK
cacti.graph_templates_graph
warning  : 3 clients are using or haven't closed the table properly
status   : OK
cacti.graph_templates_item
warning  : 3 clients are using or haven't closed the table properly
status   : OK
cacti.graph_tree                                  OK
cacti.graph_tree_items
warning  : 3 clients are using or haven't closed the table properly
status   : OK
cacti.host
warning  : 5 clients are using or haven't closed the table properly
status   : OK
cacti.host_graph
warning  : 3 clients are using or haven't closed the table properly
status   : OK
cacti.host_snmp_cache
warning  : 3 clients are using or haven't closed the table properly
status   : OK
cacti.host_snmp_query
warning  : 3 clients are using or haven't closed the table properly
status   : OK
cacti.host_template                               OK
cacti.host_template_graph                         OK
cacti.host_template_snmp_query                    OK
cacti.plugin_config                               OK
cacti.plugin_db_changes                           OK
cacti.plugin_hooks                                OK
cacti.plugin_realms                               OK
cacti.plugin_thold_contacts                       OK
cacti.plugin_thold_log                            OK
cacti.plugin_thold_template_contact               OK
cacti.plugin_thold_threshold_contact              OK
cacti.poller                                      OK
cacti.poller_command
warning  : 2 clients are using or haven't closed the table properly
status   : OK
cacti.poller_item
warning  : Table is marked as crashed
warning  : 4 clients are using or haven't closed the table properly
error    : Checksum for key:  4 doesn't match checksum for records
warning  : Found 204104 deleted space.   Should be 203016
warning  : Found 1358 deleted blocks       Should be: 1324
error    : Corrupt
cacti.poller_output                               OK
cacti.poller_reindex
warning  : 5 clients are using or haven't closed the table properly
status   : OK
cacti.poller_time                                 OK
cacti.rra                                         OK
cacti.rra_cf                                      OK
cacti.settings
warning  : 9 clients are using or haven't closed the table properly
status   : OK
cacti.settings_graphs                             OK
cacti.settings_tree                               OK
cacti.snmp_query                                  OK
cacti.snmp_query_graph                            OK
cacti.snmp_query_graph_rrd                        OK
cacti.snmp_query_graph_rrd_sv                     OK
cacti.snmp_query_graph_sv                         OK
cacti.thold_data                                  OK
cacti.thold_template                              OK
cacti.user_auth                                   OK
cacti.user_auth_perms                             OK
cacti.user_auth_realm                             OK
cacti.user_log
warning  : 4 clients are using or haven't closed the table properly
status   : OK
cacti.version                                     OK

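With a database this size, the problem tables are easy to miss in the output. A small awk filter can pull out only the tables that report warnings or errors. The sketch below runs against a shortened sample of the STEP 1 output saved to a file; on a live server you would pipe the output of `mysqlcheck -c cacti -p` in instead.

```shell
# Shortened sample of mysqlcheck output (stand-in for the real run above).
cat > /tmp/check.log <<'EOF'
cacti.poller                                      OK
cacti.poller_item
warning  : Table is marked as crashed
error    : Corrupt
cacti.poller_output                               OK
EOF

# Remember the current table name, then print it next to any
# warning/error line that follows it.
awk '/^cacti\./ {table=$1} /^(error|warning)/ {print table ": " $0}' /tmp/check.log
```

This prints only `cacti.poller_item` with its warning and error lines, skipping every table that reported OK.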
STEP 2:- Run the below command to repair the cacti database. Run it a second time if any table still reports a problem.
[root@localhost ~]# mysqlcheck -p --auto-repair --databases cacti
Enter password:
cacti.cdef                                        OK
cacti.cdef_items                                  OK
cacti.colors                                      OK
cacti.data_input                                  OK
cacti.data_input_data                             OK
cacti.data_input_fields                           OK
cacti.data_local                                  OK
cacti.data_template                               OK
cacti.data_template_data                          OK
cacti.data_template_data_rra                      OK
cacti.data_template_rrd                           OK
cacti.graph_local                                 OK
cacti.graph_template_input                        OK
cacti.graph_template_input_defs                   OK
cacti.graph_templates                             OK
cacti.graph_templates_gprint                      OK
cacti.graph_templates_graph                       OK
cacti.graph_templates_item                        OK
cacti.graph_tree                                  OK
cacti.graph_tree_items                            OK
cacti.host                                        OK
cacti.host_graph                                  OK
cacti.host_snmp_cache                             OK
cacti.host_snmp_query                             OK
cacti.host_template                               OK
cacti.host_template_graph                         OK
cacti.host_template_snmp_query                    OK
cacti.plugin_config                               OK
cacti.plugin_db_changes                           OK
cacti.plugin_hooks                                OK
cacti.plugin_realms                               OK
cacti.plugin_thold_contacts                       OK
cacti.plugin_thold_log                            OK
cacti.plugin_thold_template_contact               OK
cacti.plugin_thold_threshold_contact              OK
cacti.poller                                      OK
cacti.poller_command                              OK
cacti.poller_item                                 OK
cacti.poller_output                               OK
cacti.poller_reindex                              OK
cacti.poller_time                                 OK
cacti.rra                                         OK
cacti.rra_cf                                      OK
cacti.settings                                    OK
cacti.settings_graphs                             OK
cacti.settings_tree                               OK
cacti.snmp_query                                  OK
cacti.snmp_query_graph                            OK
cacti.snmp_query_graph_rrd                        OK
cacti.snmp_query_graph_rrd_sv                     OK
cacti.snmp_query_graph_sv                         OK
cacti.thold_data                                  OK
cacti.thold_template                              OK
cacti.user_auth                                   OK
cacti.user_auth_perms                             OK
cacti.user_auth_realm                             OK
cacti.user_log                                    OK
cacti.version                                     OK

Every line now reads OK, which means all the tables were repaired successfully.

Create Linux partition larger than 2 TB.

You cannot create a Linux partition larger than 2 TB using the fdisk command: the MBR partition table that fdisk writes cannot address a partition greater than 2 TiB. You can create a larger partition with a GPT label using the parted utility. Please follow the below steps.
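The 2 TB limit comes straight from the MBR format: it stores a partition's start sector and length as 32-bit values, and with 512-byte sectors that caps out at 2 TiB. The arithmetic can be checked in the shell:

```shell
# MBR stores sector counts in 32 bits; sectors are 512 bytes.
max_bytes=$(( 2**32 * 512 ))
echo "$max_bytes bytes"                 # 2199023255552
echo "$(( max_bytes / 1024**4 )) TiB"   # 2
```

GPT uses 64-bit sector addresses, which is why parted with a gpt label has no trouble with a 4 TB disk.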

STEP 1:- Find out the current disk size.
[root@localhost ~]# fdisk -l /dev/sdb
Disk /dev/sdb: 4000.0 GB, 3999956729856 bytes
255 heads, 63 sectors/track, 486300 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/sdb doesn't contain a valid partition table

STEP 2:- Start parted on the 4 TB disk.
[root@localhost ~]# parted /dev/sdb
GNU Parted 2.3
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted)

STEP 3:- Create a new GPT partition table.
(parted) mklabel gpt
Warning: The existing disk label on /dev/sdb will be destroyed and all data on this disk will be lost. Do you want to continue?
Yes/No? yes
(parted)

STEP 4:- Set the default unit to TB.
(parted) unit TB

STEP 5:- Create a 4 TB partition.
(parted) mkpart primary 0.00TB 4.00TB

STEP 6:- Print the current partitions.
(parted) print
Model: ATA ST33000651AS (scsi)
Disk /dev/sdb: 4.00TB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Number  Start   End     Size    File system  Name     Flags
1      0.00TB  4.00TB  4.00TB  ext4         primary

STEP 7:- Save changes and exit.
(parted) quit
Information: You may need to update /etc/fstab.

STEP 8:- Create an ext4 file system on the new partition.
[root@localhost ~]# mkfs.ext4 /dev/sdb1

STEP 9:- Mount /dev/sdb1.
[root@localhost ~]# mkdir /data
[root@localhost ~]# mount /dev/sdb1 /data
[root@localhost ~]# df -H
Filesystem             Size   Used  Avail Use% Mounted on
/dev/sda1              391G   8.5G   363G   3% /
tmpfs                   51G      0    51G   0% /dev/shm
/dev/sdb1              4.0T   211M   3.9T   1% /data
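To make the mount survive a reboot, it also needs an entry in /etc/fstab. The sketch below adds the line to a temporary copy so it is safe to run anywhere; on the real server you would append the same line to /etc/fstab itself (using the UUID from `blkid /dev/sdb1` instead of the device name is more robust if the disk order can change).

```shell
# Demonstrated on a throwaway file standing in for /etc/fstab.
fstab=$(mktemp)
echo "/dev/sdb1  /data  ext4  defaults  0 2" >> "$fstab"
grep '^/dev/sdb1' "$fstab" && echo "entry added"
```

After editing the real /etc/fstab, `mount -a` will mount everything listed and immediately reveal any typo in the entry.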

That’s it...

Kill a Linux server’s remote session

I am assuming you have two sessions/terminals open on the same Linux server.
STEP 1:- Find out the terminal name. Run the below command on the first terminal.
[root@localhost ~]# tty
/dev/pts/0

STEP 2:- Find out the terminal name. Run the below command on the second terminal.
[root@localhost ~]# tty
/dev/pts/1

STEP 3:- Find the PID of the terminal you want to kill.
[root@localhost ~]# who -all | grep root
root     + pts/0        2012-07-18 17:47   .         27981 (IP_Address)
root     + pts/1        2012-07-18 17:47   .         27999 (IP_Address)

For example, to kill the remote session /dev/pts/1, run the below command from the first terminal.
[root@localhost ~]# kill -1 27999

Where 27999 is the PID of the /dev/pts/1 terminal.
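`kill -1` sends SIGHUP, the same "hangup" signal a terminal delivers when the connection drops, and a process's default reaction to it is to terminate. The sketch below demonstrates the effect on a throwaway `sleep` process instead of a real pts session, so it is safe to run:

```shell
# Start a disposable background process to stand in for the session.
sleep 300 &
pid=$!

# SIGHUP (signal 1): default action is to terminate the receiver.
kill -1 "$pid"
wait "$pid" 2>/dev/null || true

# kill -0 only tests for existence; it fails once the process is gone.
kill -0 "$pid" 2>/dev/null || echo "process $pid is gone"
```

If a stubborn session ignores SIGHUP, `kill -9` (SIGKILL) cannot be caught and will always remove it.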

That’s it.

VoipNow Installation failed

I started the installation of VoipNow Professional in a Virtuozzo container on a new Virtuozzo 4.7 infrastructure and got the error below.

Error:-

“perl-DBD-MySQL55-4.019-1.rhel5.x86_64 from VoipNow1 has depsolving problems

--> Missing Dependency: MySQL-shared-standard >= 5.5.12 is needed by package perl-DBD-MySQL55-4.019-1.rhel5.x86_64 (VoipNow1)

mysql55-python-1.2.3-0.rhel5.x86_64 from VoipNow1 has depsolving problems

--> Missing Dependency: MySQL-server-standard >= 5.5.12 is needed by package mysql55-python-1.2.3-0.rhel5.x86_64 (VoipNow1)

perl-DBD-MySQL55-4.019-1.rhel5.x86_64 from VoipNow1 has depsolving problems

--> Missing Dependency: MySQL-shared-standard >= 5.5.12 is needed by package perl-DBD-MySQL55-4.019-1.rhel5.x86_64 (VoipNow1)

Error: Missing Dependency: MySQL-shared-standard >= 5.5.12 is needed by package perl-DBD-MySQL55-4.019-1.rhel5.x86_64 (VoipNow1)

Error: Missing Dependency: MySQL-server-standard >= 5.5.12 is needed by package mysql55-python-1.2.3-0.rhel5.x86_64 (VoipNow1)

You could try using --skip-broken to work around the problem

You could try running: package-cleanup --problems

package-cleanup --dupes

rpm -Va --nofiles --nodigest

The program package-cleanup is found in the yum-utils package.

Error: /usr/share/vzyum/bin/yum failed, exitcode”
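The same missing packages are reported several times in the log, once per dependent rpm. To see at a glance what the repository actually lacks, the depsolve lines can be deduplicated. The sketch below works on a saved sample of the error output above; on the server you would grep the real yum output instead.

```shell
# Sample of the depsolve errors (shortened copy of the log above).
cat > /tmp/yum-err.log <<'EOF'
--> Missing Dependency: MySQL-shared-standard >= 5.5.12 is needed by package perl-DBD-MySQL55-4.019-1.rhel5.x86_64 (VoipNow1)
--> Missing Dependency: MySQL-server-standard >= 5.5.12 is needed by package mysql55-python-1.2.3-0.rhel5.x86_64 (VoipNow1)
--> Missing Dependency: MySQL-shared-standard >= 5.5.12 is needed by package perl-DBD-MySQL55-4.019-1.rhel5.x86_64 (VoipNow1)
EOF

# Extract just the missing package names, one line each, no repeats.
grep -o 'Missing Dependency: [^ ]*' /tmp/yum-err.log | sort -u
```

Here that reduces the wall of errors to exactly two missing packages: MySQL-server-standard and MySQL-shared-standard.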

I had followed the below steps to install VoipNow in the Virtuozzo container.

STEP 1:- Create container with CentOS-5.7 (64-bit) on virtuozzo host (Hardware Node) server.

STEP 2:- Download VoipNow ez template from http://voipnow.4psa.com/vz/voipnow2 on virtuozzo host (Hardware Node) server and install it on same server.

[root@localhost ~]# wget http://voipnow.4psa.com/vz/voipnow2/VoipNow-centos-5-x86_64-ez-2.5-1.4PSA.noarch.rpm

[root@localhost ~]# vzpkg install template VoipNow-centos-5-x86_64-ez-2.5-1.4PSA.noarch.rpm

Check whether it installed properly:

[root@localhost ~]# vzpkg list | grep centos-5-x86_64

centos-5-x86_64 2011-12-31 17:02:23

centos-5-x86_64 VoipNow

STEP 3:- Install VoipNow ez template in created container from virtuozzo host (Hardware Node) server.

[root@localhost ~]# vzpkg install 50 VoipNow

If you are getting the same error, follow the below steps to install VoipNow in the Virtuozzo container instead.

STEP 1:- Create a container with CentOS 4.x/5.x (i386/x86_64), RedHat Enterprise Linux Server 5 (i386/x86_64), or RedHat Enterprise Linux (AS/ES) 4 (i386/x86_64).

STEP 2:- Install the yum rpm in this container. To install yum (or any rpm) in a container, follow the below link. (The yum rpm is required by the command line installer.)

Link :- http://linuxrnd.blogspot.com/2011/12/install-yum-in-parallel-virtuozzo-linux.html

STEP 3:- Download the command line installer script in the container, start the installation from the script, and follow the instructions.

[root@localhost ~]# wget http://www.4psa.com/software/voipnowinstaller.sh

[root@localhost ~]# chmod a+x voipnowinstaller.sh

[root@localhost ~]# ./voipnowinstaller.sh
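Before running any downloaded installer, it is worth a quick sanity check that the file you fetched is actually a script and not, say, an HTML error page from the download server. The sketch below demonstrates the check on a throwaway script (the file name and contents are stand-ins, not the real VoipNow installer):

```shell
# Throwaway stand-in for a downloaded installer script.
cat > /tmp/installer.sh <<'EOF'
#!/bin/sh
echo "installer ran"
EOF
chmod a+x /tmp/installer.sh

# A shell script starts with the "#!" interpreter line.
[ "$(head -c2 /tmp/installer.sh)" = "#!" ] && echo "looks like a shell script"

/tmp/installer.sh
```

The same `head -c2` check on the real voipnowinstaller.sh would immediately catch a botched download, since an HTML page would begin with `<` instead of `#!`.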