Delete Repository And GPG Key On Ubuntu Systems

This article covers the steps to delete a repository and its GPG key on Ubuntu. Every package is signed by its maintainer with a pair of keys consisting of a private key and a public key.

A user's private key is kept secret, and the public key may be given to anyone the user wants to communicate with.

Whenever you add a new repository to your system, you must also add a repository key so that the APT Package Manager trusts the newly added repository.

Once you've added the repository key, APT can verify that the packages you install come from the correct source.


To remove Repository keys:

You can remove the repository key if it is no longer needed or if the repository has already been removed from the system.

A key can be deleted by passing its full fingerprint in quotes, as follows (the fingerprint is a 40-character hexadecimal value):

$ sudo apt-key del "D320 D0C3 0B02 E64C 5B2B B274 3766 2239 8999 3A70"
OK

Alternatively, you can delete a key by entering only the last 8 characters of its fingerprint:

$ sudo apt-key del 89993A70
OK

Once you have removed the repository key, run the apt command to refresh the repository index:

$ sudo apt update

You can verify that the above GPG key has been removed by running the following command:

$ sudo apt-key list
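Before deletion, the key appears in this listing; after a successful apt-key del, its entry disappears. The entry below is a representative sketch for the hypothetical key used above (the date and uid are illustrative). Note that the short ID used earlier (89993A70) is simply the last 8 hex characters of the fingerprint:

/etc/apt/trusted.gpg
--------------------
pub   rsa4096 2019-01-01 [SC]
      D320 D0C3 0B02 E64C 5B2B  B274 3766 2239 8999 3A70
uid           [ unknown] Example Repository Signing Key <packager@example.com>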

Read More




Event Data getting Stale in Nagios - Resolve it Now

This article covers methods to fix event data getting stale in Nagios, including the causes behind it. There is a known bug relating to event data in versions 2009R1.4B-2011R1.1.

This bug has been patched, and the fix will ship in releases later than the versions listed above; but if you're experiencing this error, and/or the Nagios service is taking an excessively long time to start, you may have a corrupted MySQL table that needs repair.


To fix this Nagios error:

1. Stop the following services:

$ service nagios stop
$ service ndo2db stop
$ service mysqld stop

2. Run the repair script for mysql tables:

$ /usr/local/nagiosxi/scripts/repairmysql.sh nagios

3. Unzip and copy the following dbmaint file to /usr/local/nagiosxi/cron/. This will overwrite the previous version.

$ cd /tmp
$ wget http://assets.nagios.com/downloads/nagiosxi/patches/dbmaint.zip
$ unzip dbmaint.zip
$ chmod +x dbmaint.php
$ cp dbmaint.php /usr/local/nagiosxi/cron
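4. Restart the services stopped in step 1 so everything comes back up against the repaired tables. This restart step is a hedged addition inferred from step 1 rather than part of the original instructions; MySQL is started first so it is available when ndo2db and Nagios come up:

$ service mysqld start
$ service ndo2db start
$ service nagios start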

Read More




Boost performance of Websites using Cloudflare - Tips to implement it

This article covers how to improve the performance of Websites using Cloudflare. Website speed has a huge impact on user experience, SEO, and conversion rates. Improving website performance is essential for drawing traffic to a website and keeping site visitors engaged. 

Along with caching and the CDN, Cloudflare helps protect your site against brute-force attacks and other threats against your website.

Cloudflare has the advantage of serving millions of websites, and so can identify malicious bots and users more easily than any operating-system firewall can.


CDNs boost the speed of websites by caching content in multiple locations around the world. CDN caching servers are typically located closer to end users than the host, or origin, server. Requests for content go to a CDN server instead of all the way to the hosting server, which may be thousands of miles away from the user and across multiple autonomous networks. Using a CDN can result in a massive decrease in page load times.


How to get started on optimizing website performance with the Cloudflare CDN (content delivery network)?

1. Optimize images

Images comprise a large percentage of Internet traffic, and they often take the longest to load on a website since image files tend to be larger in size than HTML and CSS files. Luckily, image load time can be reduced via image optimization. Optimizing images typically involves reducing the resolution, compressing the files, and reducing their dimensions, and many image optimizers and image compressors are available for free online.
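As a minimal sketch, assuming ImageMagick is installed and photo.jpg is an illustrative filename, the following command reduces an image's dimensions (to 1200 pixels wide, keeping the aspect ratio) and compresses it in one step:

$ convert photo.jpg -resize 1200x -quality 80 photo-optimized.jpg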

2. Minify CSS and JavaScript files

Minifying code means removing anything that a computer doesn't need in order to understand and carry out the code, including code comments, whitespace, and unnecessary semicolons. This makes CSS and JavaScript files slightly smaller so that they load faster in the browser and take up less bandwidth.
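As a hedged example, assuming Node.js is available, the terser package can minify a JavaScript file from the command line (app.js is an illustrative filename):

$ npx terser app.js --compress --mangle -o app.min.js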

3. Reduce the number of HTTP requests if possible

Most webpages will require browsers to make multiple HTTP requests for various assets on the page, including images, scripts, and CSS files. In fact, many webpages require dozens of these requests. Each request results in a round trip to and from the server hosting the resource, which can add to the overall load time for a webpage.
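As a rough heuristic for how many asset requests a page triggers, you can count the src and href attributes in its HTML (index.html is an illustrative filename; the real number also depends on scripts that load further resources):

$ grep -oE '(src|href)="[^"]+"' index.html | wc -l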

4. Use browser HTTP caching

The browser cache is a temporary storage location where browsers save copies of static files so that they can load recently visited webpages much more quickly, instead of needing to request the same content over and over. Developers can instruct browsers to cache elements of a webpage that will not change often. Instructions for browser caching go in the headers of HTTP responses from the hosting server.
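You can inspect whether a given asset is served with such caching instructions by fetching only its response headers; the URL below is illustrative, and a typical directive looks like max-age=31536000 (one year):

$ curl -sI https://example.com/style.css | grep -i cache-control
Cache-Control: max-age=31536000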

5. Minimize the inclusion of external scripts

Any scripted webpage elements that are loaded from somewhere else, such as external commenting systems, CTA buttons, or lead-generation popups, need to be loaded each time a page loads.

6. Don't use redirects, if possible

A redirect is when visitors to one webpage get forwarded to a different page instead. Redirects add a few fractions of a second, or sometimes even whole seconds, to page load time.
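To see how many hops a redirected URL takes before settling, you can follow the redirect chain and print only the status and Location lines (the URL below is illustrative):

$ curl -sIL https://example.com/old-page | grep -iE '^(HTTP|location)'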

Read More




Amazon Redshift - Its features and how to set it up

This article covers an effective method to set up Amazon Redshift. Amazon Redshift is a fully managed, petabyte-scale data warehouse service in the cloud. This enables you to use your data to acquire new insights for your business and customers. The first step to create a data warehouse is to launch a set of nodes, called an Amazon Redshift cluster.

Amazon Redshift is a relational database management system (RDBMS), so it is compatible with other RDBMS applications. Amazon Redshift and PostgreSQL have a number of very important differences that you need to take into account as you design and develop your data warehouse applications.

Amazon Redshift is based on PostgreSQL.

Amazon Redshift is specifically designed for online analytic processing (OLAP) and business intelligence (BI) applications, which require complex queries against large datasets.
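Because Redshift is PostgreSQL-based, you can connect to a cluster with the standard psql client; the endpoint, database, and user below are placeholders for your own cluster's values (5439 is Redshift's default port):

$ psql -h examplecluster.abc123xyz789.us-west-2.redshift.amazonaws.com -p 5439 -d dev -U awsuser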


What is the difference between Amazon Redshift, Amazon Redshift Spectrum, and Amazon RDS?

Amazon Simple Storage Service (Amazon S3) is a service for storing objects, and Amazon Redshift Spectrum enables you to run Amazon Redshift SQL queries against exabytes of data in Amazon S3.

Both Amazon Redshift and Amazon RDS enable you to run traditional relational databases in the cloud while offloading database administration. 

Customers use Amazon RDS databases primarily for online transaction processing (OLTP) workloads, while Redshift is used primarily for reporting and analytics.

Read More




Best Ubuntu APT Repository Mirror - How to get it

This article covers methods to find the best APT mirror for an Ubuntu server.


To Find Best Ubuntu APT Repository Mirror Using Apt-smart:

Apt-smart is yet another command-line tool written in Python. It helps you find the APT mirrors that provide the best download rates for your location. It can smartly retrieve mirrors by querying the Debian, Ubuntu, and Linux Mint mirror lists and choose the best mirror based on the country the user lives in. The discovered mirrors are ranked by bandwidth and by their status (up-to-date, 3-hours-behind, one-week-behind, and so on).

Another notable feature of Apt-smart is that it will automatically switch to a different mirror when the current mirror is being updated. The new mirrors can be selected either automatically or manually by the user. A good thing is that Apt-smart backs up the current sources.list before updating it with new mirrors.


To Install Apt-smart in Ubuntu:

Make sure you have installed Pip (the commands below use pip3), then run the following commands one by one to install Apt-smart:

$ pip3 install --user apt-smart
$ echo "export PATH=\$(python3 -c 'import site; print(site.USER_BASE + \"/bin\")'):\$PATH" >> ~/.bashrc
$ source ~/.bashrc
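To confirm that the newly exported PATH picks up the executable, you can check where apt-smart resolves to:

$ which apt-smart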


To List all mirrors based on rank:

To list all available ranked mirrors in the terminal, run:

$ apt-smart --list-mirrors

Or,

$ apt-smart -l


To Automatically update mirrors:

Instead of manually finding and updating the best mirror in Ubuntu, you can let Apt-smart choose the best APT mirror and automatically update sources.list with the new one, like below:

$ apt-smart --auto-change-mirror
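Once the mirror has been changed, refresh the package index so APT starts pulling from the new source:

$ sudo apt update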

To get help, run:

$ apt-smart --help

Read More




DirectAdmin User too large delete on background - Methods to resolve this error

This article covers a method to fix the error "DirectAdmin: User too large delete on background". Basically, this error occurs when the total disk usage of a user exceeds a certain threshold.

To prevent time-outs in your browser when deleting excessively large accounts, DirectAdmin executes the deletion by adding the command to the background task.queue instead of performing it in the foreground.


To fix the "DirectAdmin: User too large delete on background" error, connect to the server through SSH with root access, then go to DirectAdmin's installation directory as below:

$ cd /usr/local/directadmin/conf/

Then edit the directadmin.conf file in that directory by running:

$ vi directadmin.conf

If the variable "get_background_delete_size" exists in the directadmin.conf file, it is set to 10 GB by default (get_background_delete_size=10240, a value in megabytes).

If the variable cannot be found in the file, simply add it in. 

You can change the value 10240 (in megabytes) to whatever threshold you wish to set, as illustrated below.
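As an illustrative sketch, raising the threshold to 20 GB would mean setting the line as follows; the restart afterwards, so that DirectAdmin picks up the new value, is an assumption based on common DirectAdmin practice rather than something stated above:

get_background_delete_size=20480

$ service directadmin restart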

Read More




