
About This Club

This club is for anyone who loves Linux. There is no cost to you. We wanted a club focused on the ins and outs of Linux to help others.
  2. We leverage a Linux box (RHEL 7) to store a month's worth of F5 UCS files from all 70 of our F5s in the event we need to restore from scratch. Each file averages around 700M, so the disk can fill up rather quickly if you don't do some housekeeping and remove old files. This is where crontab can help. Here are some housekeeping examples for the directory where we save our files, /home/confback/backups/f5/. Note that entries in /etc/crontab need a user field (here root) between the schedule and the command.

++++++++++++++++++++++++++++++++++++++++++++++++++++++
sudo vi /etc/crontab

## This entry leaves thirty days of .ucs files on the server
0 0 * * * root /usr/bin/find /home/confback/backups/f5/ -name "*.ucs" -type f -mtime +30 -exec rm -f {} \;

## This entry leaves seven days of .ucs files on the server
0 0 * * * root /usr/bin/find /home/confback/backups/f5/ -name "*.ucs" -type f -mtime +7 -exec rm -f {} \;

## This entry leaves roughly two days of all files on the server
0 0 * * * root /usr/bin/find /home/confback/backups/f5/ -type f -mtime +1 -exec rm {} +

## This entry leaves only today's files on the server
5 0 * * * root /usr/bin/find /home/confback/backups/f5/ -type f -mmin +1440 -delete

## This entry is very similar to the -mtime +1 entry above, with a slight difference in how the command ends
0 0 * * * root /usr/bin/find /home/confback/backups/f5/ -type f -mtime +1 -exec rm -rf {} \;
++++++++++++++++++++++++++++++++++++++++++++++++++++++
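Before putting any of these find commands in cron, it's worth a dry run with -print in place of the delete action, so you can see exactly what would be removed. A minimal sketch against a throwaway directory (the file names here are made up for illustration):

```shell
#!/bin/sh
# Dry-run sketch: list what a retention rule WOULD delete before
# switching -print to -delete or -exec rm -f {} \;
# Uses a temp directory so nothing real is touched.
dir=$(mktemp -d)

touch "$dir/new.ucs"                   # modified now -> kept
touch -d '10 days ago' "$dir/old.ucs"  # older than 7 days -> deletion candidate

# Same predicate as the cron jobs, but -print instead of a delete action
find "$dir" -name "*.ucs" -type f -mtime +7 -print

rm -rf "$dir"
```

Only once the -print output shows exactly the files you expect should the delete action go into /etc/crontab.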
  3. Configuring Nagios to monitor devices

It's easier to just jump into the parent directory of Nagios to do any configuration: /usr/local/nagios/etc/. Here you will find the basics:

cgi.cfg
htpasswd.users
nagios.cfg
objects
resource.cfg

I personally create two folders to monitor what I'm responsible for:

f5
ib

These folders don't mean anything unless you define them in /usr/local/nagios/etc/nagios.cfg:

# You can also tell Nagios to process all config files (with a .cfg
# extension) in a particular directory by using the cfg_dir
# directive as shown below:
#cfg_dir=/usr/local/nagios/etc/servers
#cfg_dir=/usr/local/nagios/etc/printers
#cfg_dir=/usr/local/nagios/etc/switches
#cfg_dir=/usr/local/nagios/etc/routers
cfg_dir=/usr/local/nagios/etc/f5
cfg_dir=/usr/local/nagios/etc/ib

Let's start easy. Under /usr/local/nagios/etc/f5, create a cfg file. In my example I'm creating one called tst1.cfg, and it looks like this:

define host{
    use        linux-server
    host_name  usdet2slbcov01-ch.atic.mwg.com
    alias      usdet2slbcov01-ch
    address    10.11.38.20
    hostgroups LinuxServers
}

Now if that's all you do and you think it's going to work, and you run the command to restart Nagios:

systemctl restart nagios

you will get an error, and the website will say "Unable to get process status". Looking at journalctl -xe:

Oct 03 12:42:02 usdet1lvdwb003.ally.corp nagios[1603]: Reading configuration data...
Oct 03 12:42:02 usdet1lvdwb003.ally.corp nagios[1603]: Read main config file okay...
Oct 03 12:42:02 usdet1lvdwb003.ally.corp nagios[1603]: Error: Could not find any hostgroup matching 'LinuxServers' (config file
Oct 03 12:42:02 usdet1lvdwb003.ally.corp nagios[1603]: Error: Failed to process hostgroup names for host 'usdet2slbcov01-ch.atic
Oct 03 12:42:02 usdet1lvdwb003.ally.corp nagios[1603]: Error processing object config files!
Oct 03 12:42:02 usdet1lvdwb003.ally.corp nagios[1603]: ***> One or more problems was encountered while processing the config fil
Oct 03 12:42:02 usdet1lvdwb003.ally.corp nagios[1603]: Check your configuration file(s) to ensure that they contain valid

So it's clear that we have a host_name (usdet2slbcov01-ch.atic.mwg.com) that is supposedly part of a hostgroup (LinuxServers) per the tst1.cfg file we just created. What we didn't do is define the hostgroup (LinuxServers) anywhere. Open up tst1.cfg and add at the beginning of the file:

###############################################################################
#
# HOST GROUP DEFINITIONS
#
###############################################################################

# Create a new hostgroup for F5
define hostgroup {
    hostgroup_name f5          ; The name of the hostgroup
    alias          F5 Devices  ; Long name of the group
}

One caveat: the hostgroup_name must match whatever the hostgroups line in your host definition references. In the tst1.cfg above that's LinuxServers, so either name this hostgroup LinuxServers or change the host's hostgroups line to f5. You can also pre-flight the configuration before restarting with /usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg.
  4. Here are my notes from deploying Nagios on a RHEL 7 box.

SELinux has to be disabled or in permissive mode. Steps to do this are as follows:

sed -i 's/SELINUX=.*/SELINUX=disabled/g' /etc/selinux/config
setenforce 0

Install the prerequisites:

yum install -y gcc glibc glibc-common wget unzip httpd php gd gd-devel perl postfix
yum install openssl-devel

Download Nagios:

cd /tmp
wget -O nagioscore.tar.gz https://github.com/NagiosEnterprises/nagioscore/archive/nagios-4.4.7.tar.gz
tar xzf nagioscore.tar.gz

Compile the files to install:

cd /tmp/nagioscore-nagios-4.4.7/
./configure
make all

Create the user and group:

make install-groups-users
usermod -a -G nagios apache

Install the binaries:

make install

Install the service / daemon:

make install-daemoninit
systemctl enable httpd.service

Install command mode:

make install-commandmode

Install the sample config files:

make install-config

Install the Apache files:

make install-webconf

Configure the firewall:

firewall-cmd --zone=public --add-port=80/tcp
firewall-cmd --zone=public --add-port=80/tcp --permanent

Create the nagiosadmin account:

htpasswd -c /usr/local/nagios/etc/htpasswd.users nagiosadmin

When adding additional users in the future, remove -c from the above command; otherwise it will replace the existing nagiosadmin user (and any other users you may have added).

Start the Apache web service:

systemctl start httpd.service

Start the Nagios service / daemon:

systemctl start nagios.service

Test Nagios:

http://<IPaddress OR FQDN>/nagios

Install the necessary plugins:

yum install -y gcc glibc glibc-common make gettext automake autoconf wget openssl-devel net-snmp net-snmp-utils epel-release
yum install -y perl-Net-SNMP

If the above doesn't work, try this on the RHEL 7 instance:

cd /tmp
wget https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
rpm -ihv epel-release-latest-7.noarch.rpm
subscription-manager repos --enable=rhel-7-server-optional-rpms
yum install -y gcc glibc glibc-common make gettext automake autoconf wget openssl-devel net-snmp net-snmp-utils
yum install -y perl-Net-SNMP
  5. LET'S GET STARTED, STEP BY STEP

LibreNMS is becoming one of my favourite monitoring tools. Setup and getting started is easy, and it has enough advanced options and tunables. I recently discovered that LibreNMS is able to check services as well. Services, in this context, means executing Nagios plugins (like check_http, check_ping, etc). This allows you to check things that SNMP does not cover by default, like HTTP(S) health checks, certificate expiry, TCP port checks (e.g. RDP) and anything for which you can write a Nagios plugin yourself. The performance data, if available, is graphed automatically. Alerting is done with the regular LibreNMS alerts. This guide covers the setup of services (it's not enabled by default) and a few basic checks: an HTTP health check, certificate expiry and SSH monitoring.

Nagios check plugins

For those unfamiliar with Nagios, it is a monitoring system which can execute checks. These checks are scripts and programs which take input (for example, which host to check, and thresholds), perform a check, and then return an exit code and some performance data. The plugins can be in any language; Nagios only cares about the exit codes. They can be the following:

0: OK
1: WARNING
2: CRITICAL
3: UNKNOWN

For example, to check if a website is working, you would use the check_http plugin. This plugin checks if the site returns a 200 OK and, if so, gives exit status 0. If not, for example because of a timeout, access denied or 50x error, it will return status 1 or 2. Nagios can then do all kinds of alerting based on those statuses. Performance data is appended after the status text, separated by a pipe (|) character. This can be anything, for example the time the HTTP request took. Since you can write these scripts yourself, any monitoring system that uses these plugins is very extensible. It can check anything you want as long as you can write a script for it. This makes the monitoring tool very powerful; you're not limited to what they provide.
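To make the exit-code and perf-data contract concrete, here is a toy plugin sketch. check_value and its thresholds are hypothetical (not a real monitoring-plugins check), and it's written as a shell function for easy testing; real plugins are standalone executables:

```shell
# check_value: a toy Nagios-style check (hypothetical name/logic).
# Prints "STATUS - text|label=value;warn;crit" and returns 0/1/2,
# mirroring the OK/WARNING/CRITICAL exit-code convention.
check_value() {
    value=$1; warn=$2; crit=$3
    if [ "$value" -ge "$crit" ]; then
        echo "CRITICAL - value is ${value}|value=${value};${warn};${crit}"
        return 2
    elif [ "$value" -ge "$warn" ]; then
        echo "WARNING - value is ${value}|value=${value};${warn};${crit}"
        return 1
    fi
    echo "OK - value is ${value}|value=${value};${warn};${crit}"
    return 0
}

check_value 85 80 90 || echo "exit status: $?"
```

Anything that follows this contract can be dropped into the plugins directory and used as a service check.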
Step 1: Enabling service checks

Service checks are not enabled by default in LibreNMS. The documentation explains how to enable the module. In this guide I assume your path is /opt/librenms/. Edit your config file:

sudo nano /opt/librenms/config.php

Add the following line:

$config['show_services'] = 1;

Service Auto Discovery

To automatically create services for devices with available checks, enable service discovery in /opt/librenms/config.php with the following:

$config['discover_services'] = true;

Service Templates Auto Discovery

To automatically create services for devices with configured Service Templates, enable it in /opt/librenms/config.php with the following:

$config['discover_services_templates'] = true;

Then point LibreNMS at the location of the Nagios plugins (please ensure that any plugins you use are set executable):

Debian/Ubuntu:
$config['nagios_plugins'] = "/usr/lib/nagios/plugins";

CentOS:
$config['nagios_plugins'] = "/usr/lib64/nagios/plugins";

Save the file.

Step 2: Setup

Service checks are now distributable if you run a distributed setup. To leverage this, use the dispatch service. Alternatively, you could replace check-services.php with services-wrapper.py in cron to run across all polling nodes. If you need to debug the output of services-wrapper.py, you can add -d to the end of the command; it is NOT recommended to do this in cron.

First, install the Nagios plugins.

Debian/Ubuntu:
sudo apt install monitoring-plugins

CentOS:
yum install nagios-plugins-all

(On older Debian/Ubuntu releases the packages were named differently: apt-get install nagios-plugins nagios-plugins-extra.)
Make the plugins executable if needed. For example:

Debian/Ubuntu:
chmod +x /usr/lib/nagios/plugins/*

CentOS:
chmod +x /usr/lib64/nagios/plugins/*

Edit the LibreNMS cron job to include service checks:

sudo nano /etc/cron.d/librenms

Add:

*/5 * * * * librenms /opt/librenms/services-wrapper.py 1

Step 3: Debug

Change user to librenms (for example, su - librenms); then you can run the following command to help troubleshoot services:

./check-services.php -d

Performance data after a test to see if the plugins work:

su - librenms
./check-services.php -d
-- snip --
Nagios Service - 26
Request: /usr/lib/nagios/plugins/check_icmp localhost
Perf Data - DS: rta, Value: 0.016, UOM: ms
Perf Data - DS: pl, Value: 0, UOM: %
Perf Data - DS: rtmax, Value: 0.044, UOM: ms
Perf Data - DS: rtmin, Value: 0.009, UOM: ms
Response: OK - localhost: rta 0.016ms, lost 0%
Service DS: { "rta": "ms", "pl": "%", "rtmax": "ms", "rtmin": "ms" }
OK u:0.00 s:0.00 r:40.67
RRD[update /opt/librenms/rrd/localhost/services-26.rrd N:0.016:0:0.044:0.009]
-- snip --

Do a test to see if the plugins work:

/usr/lib/nagios/plugins/check_http -H google.com -S -p 443

Example output:

HTTP OK: HTTP/1.1 200 OK - 1320 bytes in 0.199 second response time |time=0.198748s;;;0.000000 size=1320B;;;0

or:

/usr/lib/nagios/plugins/check_icmp 8.8.8.8

Step 4: Alerting

Services use the Nagios alerting scheme, where exit code 0 = OK, 1 = Warning, 2 = Critical. To create an alerting rule that alerts on service = critical, your alerting rule would look like:

%services.service_status = "2"

There is a default alert rule in LibreNMS named Service up/down:

services.service_status != 0 AND macros.device_up = 1

If you want to differentiate between WARNING and CRITICAL Nagios alerts, you can create two rules:

# warning
services.service_status = 1 AND macros.device_up = 1
# critical
services.service_status = 2 AND macros.device_up = 1

Step 5: Related Polling / Discovery Options

These settings are related and should be investigated and set accordingly.
The values below are examples, not defaults or recommendations:

$config['service_poller_enabled'] = true;
$config['service_poller_workers'] = 24;
$config['service_poller_frequency'] = 300;
$config['service_poller_down_retry'] = 5;
$config['service_discovery_enabled'] = true;
$config['service_discovery_workers'] = 16;
$config['service_discovery_frequency'] = 3600;
$config['service_services_enabled'] = true;
$config['service_services_workers'] = 16;
$config['service_services_frequency'] = 60;

Step 6: Service checks polling logic

A service check is skipped when the associated device is not pingable, and an appropriate entry is populated in the event log. A service check is polled if its IP address parameter is not equal to the associated device's IP address, even when the associated device is not pingable. To override the default logic and always poll service checks, you can disable ICMP testing for any device by switching the Disable ICMP Test setting (Edit -> Misc) to ON. Service checks will never be polled on disabled devices.

Adding a dummy host for testing

You must have a host in LibreNMS to be able to add service checks. Normally you would use SNMP to monitor devices, but if you just want to do simple (HTTP) checks without SNMP, you can add a host without SNMP or TCP checks. Via Devices, Add Device you can enter a URL/IP. Uncheck the SNMP checkbox and click the Force add button. If this device does not accept ICMP (ping) traffic, you can disable that as well: go to the device, select the cog menu, Edit, "Misc" tab, then check "Disable ICMP Test?".

If you do want to use SNMP, here is a quick guide for Ubuntu. First install snmpd:

apt-get install snmpd

Edit the configuration.
Remove everything and add the following:

agentAddress udp:161
createUser <username> SHA "<password>" AES "<password>"
view systemonly included .1.3.6.1.2.1.1
view systemonly included .1.3.6.1.2.1.25.1
rwuser <username>
sysLocation <location>
sysContact <your name and email>
includeAllDisks 10%
defaultMonitors yes
linkUpDownNotifications yes

Change username and password to a long and secure name and password (8 characters minimum). Restart snmpd:

service snmpd restart

Add a rule in your firewall to only allow access to UDP port 161 from your monitoring server and deny all other traffic. You can now add this machine in LibreNMS using SNMPv3 and the authentication data you provided.

Configuring services in LibreNMS

In LibreNMS you should now have a new tab button in the top menu, named "Services". Make sure you added a host as described above. You can navigate to a host and click the "Services" tab, then click "Add service". In the top menu bar you can also click "Services", "Add Service"; you then have to select the host as well.

The type is the Nagios plugin you want to use, in our case http (the check_ prefix is not shown). Enter a meaningful description, for example "HTTP Check https://example.org/path/to/data". The IP address can be the hostname or the IP; it is recommended to make this the same as the host the service is coupled to.

The "Parameters" are the Nagios check command parameters, as on the shell. In the case of an HTTP check for one of the servers hosting google.com it would be:

-E -I 192.168.88.6 -S -p 443 -u "/index.html"

IP Address: 192.168.88.6
-E: extended performance data
-I 192.168.88.6: the specific IP address (optional, I have multiple A records)
-S: use SSL
-p 443: use port 443
-u "/index.html": the URL to request (optional)

All parameters can be found on the monitoring-plugins website. You can test on the shell first before you add the check to LibreNMS. Save the dialog box and wait a few minutes for the check to run.
An SSH check is even simpler: just select SSH as the type and add the check. Here is an example of a Cisco switch where SSH is checked.

A certificate check, to get an alert when a certificate is about to expire, can also be done. The type is http and the parameters are:

--sni -S -p 443 -C 30

It will check if the certificate expires within 30 days.

Limits

Specific alerting and rechecking when a check fails is not as configurable as in Icinga or Nagios. The check will run, and alert you on a failure. Icinga/Nagios allow you to configure escalation paths and advanced re-checking, for example: when a check fails, recheck it 4 times with an interval of X seconds (instead of the regular check interval) and only alert if it still fails.

In Icinga you can define (service or host) groups and apply service checks to these groups. LibreNMS doesn't allow this, so you cannot define a check once and apply it to a group. If you need to check 100 servers, that means defining 100 checks by hand, one per server.

Here is an example of a dummy host (no ICMP or SNMP) with an HTTP check and alerting enabled:

https://www.monitoring-plugins.org/doc/man/check_http.html
https://docs.librenms.org/Extensions/Services/
  6. This is a great solution for monitoring the devices in your network. I personally did the Docker image deployment and regret it, and will be working on deploying the updated OVA image provided by the LibreNMS community, which can be found here (the issue is it's Ubuntu and not CentOS). Below is my journey using the provided image on a VMware ESX host.

How do I migrate my LibreNMS install to another server?

If you are moving from one CPU architecture to another, you will need to dump the rrd files and re-create them. If you are in this scenario, you can use Dan Brown's migration scripts. If you are just moving to another server with the same CPU architecture, the following steps should be all that's needed:

1. Install LibreNMS as per the normal documentation; you don't need to run through the web installer or build the SQL schema.
2. Stop cron by commenting out all lines in /etc/cron.d/librenms.
3. Dump the MySQL database librenms from your old server (mysqldump librenms -u root -p > librenms.sql) and import it into your new server (mysql -u root -p librenms < librenms.sql).
4. Copy the rrd/ folder to the new server.
5. Copy the .env and config.php files to the new server.
6. Check for modified files (e.g. specific os definitions) with git status and migrate them.
7. Ensure ownership of the copied files and folders (substitute your user if necessary): chown -R librenms:librenms /opt/librenms
8. Delete old pollers in the GUI (gear icon -> Pollers -> Pollers).
9. Validate your installation (/opt/librenms/validate.php).
10. Re-enable cron by uncommenting all lines in /etc/cron.d/librenms.

This is supposed to be all that is necessary to migrate.
  7. In many cases you spend a lot of time and energy creating files, only to have someone trash the system, and then you spend what feels like forever trying to get your files back. An easy solution is to use another Linux server so your files are always in two places. We are going to use the very powerful rsync tool to sync a directory, and all subdirectories and files, from an Active server to a server we will use as our Backup server.

ACTIVE Server: IP 1.1.1.1, directory to back up: /apps
BACKUP Server: IP 2.2.2.2

Use an SSH key for authentication between the two servers. Create an SSH key on the Backup server by running:

ssh-keygen -t rsa -b 2048

(If asked for a passphrase, leave it blank.)

Copy the public key to the Active server:

ssh-copy-id -i /root/.ssh/id_rsa.pub root@1.1.1.1

Install rsync on both servers:

sudo yum install rsync -y
sudo dnf install rsync -y

From the Backup server, do an initial rsync from the Active box without using a password:

rsync -avzhe ssh root@1.1.1.1:/apps /

Set up cron to do this every 5 minutes. From the Backup server run:

crontab -e

Create a new line and paste the successful rsync command:

* * * * * rsync -avzhe ssh root@1.1.1.1:/apps /

(This runs every minute for testing; once it's working, change the schedule to every 5 minutes, e.g. */5 * * * *.)

Validate / Test

Here I typically add a test.txt file to one of the directories on the Active server and see if it shows up on the Backup server. If so, success. If not, time to troubleshoot step by step.
  8. When you maintain a Linux-based operating system that is locked down (so you cannot install helpful tools to track events), you have to get creative. So I created this script (it's real ugly, but it works) that looks at every user in the /etc/passwd file and checks their bash history for commands; in this case I also check what commands they may have run on the F5. It may be helpful for you or not. It doesn't cost you anything (unless you want to donate a tasty beverage to me for all my hard work).

Create a script (example: auditcmds.sh):

#!/bin/bash
#### Created By: Dennis Hosang
#### Script gathers audit information for users
#### Version 1.0 2022.04.14
clear
outfile=/var/tmp/DJ/audit/auditcmdsOutput_$(date +%Y%m%d).txt
mkdir -p "$(dirname "$outfile")"
echo "User audit on $HOSTNAME.$(date +%Y%m%d)" > "$outfile"
while IFS=: read -r f1 f2 f3 f4 f5 f6 f7
do
    echo "........ start $f1 ($f5) ........" >> "$outfile"
    echo "User $f1 uses the $f7 shell and stores files in the $f6 directory." >> "$outfile"
    echo "***** User $f1 tmsh-history *****" >> "$outfile"
    # Skip users that have no history file
    [ -f "/home/$f1/.tmsh-history-$f1" ] && cat "/home/$f1/.tmsh-history-$f1" >> "$outfile"
    echo "***** User $f1 bash_history *****" >> "$outfile"
    [ -f "/home/$f1/.bash_history" ] && cat "/home/$f1/.bash_history" >> "$outfile"
    echo "........ done $f1 ($f5) ........" >> "$outfile"
    echo " " >> "$outfile"
done < /etc/passwd

It's pretty self-explanatory, but once you create this file you can simply run:

bash auditcmds.sh
  9. It turns out dnf has changed the way it deals with proxies. If you're using basic proxy authentication, you need to specify it:

vi /etc/dnf/dnf.conf

# proxy settings
proxy=http://proxy.domain.com:3128/
proxy_username=username
proxy_password=password
proxy_auth_method=basic
  10. One of the applications I use to manage my Linux/F5 boxes is iTerm2 (even though ZOC8 is my favorite, at work they want us to use iTerm2 for cost reasons). When you want to transfer files in iTerm2, you hold the Option key and click and drag the file to the SSH window, which will automatically SCP the file up, provided you installed the shell integration. In short, the integration adds a command to your ~/.bash_profile, and in every instance I have ever had, I have to add a line before it to set the hostname, so the last couple of lines in my .bash_profile look like this:

export iterm2_hostname=foo.example.com
test -e ~/.iterm2_shell_integration.bash && source ~/.iterm2_shell_integration.bash || true
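That test -e ... && source ... || true line is a guard idiom: source the file only if it exists, and swallow the failure if it doesn't, so a missing integration file can't break the login shell. It's easy to verify with a path that doesn't exist (the file name below is made up):

```shell
#!/bin/sh
# Guard idiom from .bash_profile: only source the file if present,
# and force a zero exit status either way.
f=/nonexistent/integration.sh
test -e "$f" && . "$f" || true
echo "login continues, exit status: $?"
```

Without the trailing || true, a failed test -e would leave a nonzero exit status, which some login setups treat as an error.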
  11. Let's touch on a few basics of Docker.

How to copy a file from a Docker container to the host

Identify the container ID:

[root@usdet1lvdwb002 ~]# docker ps
CONTAINER ID  IMAGE                     COMMAND                 CREATED       STATUS                PORTS                                                                                   NAMES
3feed178f975  portainer/portainer-ce    "/portainer"            4 months ago  Up 3 hours            8000/tcp, 0.0.0.0:9000->9000/tcp, :::9000->9000/tcp                                     portainer
8fc045e68c72  librenms/librenms:latest  "/init"                 4 months ago  Up 3 hours            0.0.0.0:514->514/tcp, 0.0.0.0:514->514/udp, :::514->514/tcp, :::514->514/udp, 8000/tcp  librenms_syslogng
5f2591912625  librenms/librenms:latest  "/init"                 4 months ago  Up 3 hours            514/tcp, 8000/tcp, 514/udp                                                              librenms_dispatcher
768e90e4a2c8  librenms/librenms:latest  "/init"                 4 months ago  Up 3 hours            514/tcp, 514/udp, 0.0.0.0:8000->8000/tcp, :::8000->8000/tcp                             librenms
2926209e5c1d  eff629089685              "docker-entrypoint.s…"  4 months ago  Up 3 hours            3306/tcp                                                                                librenms_db
51a9b6406a90  crazymax/msmtpd:latest    "/init"                 4 months ago  Up 3 hours (healthy)  2500/tcp                                                                                librenms_msmtpd
e17acb99488a  memcached:alpine          "docker-entrypoint.s…"  4 months ago  Up 3 hours            11211/tcp                                                                               librenms_memcached
896721f9b451  redis:5.0-alpine          "docker-entrypoint.s…"  4 months ago  Up 3 hours            6379/tcp                                                                                librenms_redis

Copy the file (librenms.sql); in this case it's on the db container. You only need to use part of the container ID, and docker cp also needs a destination (here, the current directory):

docker cp 2926209:/librenms.sql .

It's that easy.

How do I SSH into a running container?

There is a docker exec command that can be used to connect to a container that is already running.

Use docker ps to get the name of the existing container.
Use the command docker exec -it <container name> /bin/bash to get a bash shell in the container.
Generically, use docker exec -it <container name> <command> to execute whatever command you specify in the container.

How do I run a command in my container?

The proper way to run a command in a container is:

docker-compose run <container name> <command>
For example, to get a shell into your web container, you might run:

docker-compose run web /bin/bash

To run a series of commands, you must wrap them in a single command using a shell. For example:

docker-compose run <name in yml> sh -c '<command 1> && <command 2> && <command 3>'

In some cases you may want to run a container that is not defined by a docker-compose.yml file, for example to test a new container configuration. Use docker run to start a new container with a given image:

docker run -it <image name> <command>

The docker run command accepts command line options to specify volume mounts, environment variables, the working directory, and more.
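The sh -c wrapping works the same outside Docker, so you can sanity-check a command chain locally before handing it to docker-compose run. Note that && short-circuits at the first failure:

```shell
#!/bin/sh
# Chain commands in one shell invocation, as docker-compose run ... sh -c does.
sh -c 'echo one && echo two && echo three'

# A failing command stops the chain; the commands after it never run.
sh -c 'echo start && false && echo never-reached' || echo "chain stopped"
```

If you need later commands to run regardless of earlier failures, separate them with ; instead of &&.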
  12. I was messing around with a nice monitoring tool called LibreNMS, and all was cool until I started logging syslog data. Over the weekend I came back to LibreNMS not running and couldn't figure out why. I then noticed Docker was down, so I ran my command to look at the drive space and saw the issue: only 20K of space left on the drive.

[root@usdet1lvdwb002 /]# df -h /
Filesystem           Size  Used Avail Use% Mounted on
/dev/mapper/cl-root  744G  744G   20K 100% /

I tried to check for large files, but the command won't even run, since there is no space available on the drive even for a tmp file:

[root@usdet1lvdwb002 /]# find -type f -exec du -Sh {} + | sort -rh | head -n 5
du: cannot access './proc/1998/task/1998/fdinfo/5': No such file or directory
du: cannot access './proc/1998/task/1998/fdinfo/10': No such file or directory
du: cannot access './proc/1998/fdinfo/5': No such file or directory
sort: cannot create temporary file in '/tmp': No space left on device

Luckily I'm using a virtual CentOS 8 image on VMware ESX, so I can beg the team to provide me with more HDD space, after they just moved me from 500GB to 750GB.
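An alternative that still works when /tmp has no room for sort's spill files is to walk one directory level at a time with du; the per-level output is only a handful of lines, so sort never needs a temp file. A sketch against a throwaway directory (the paths and sizes are made up for the demo; on the real box you would point it at /):

```shell
#!/bin/sh
# Show the biggest immediate subdirectories of a path without listing every file.
# -x stays on one filesystem; --max-depth=1 (GNU du) keeps the output tiny.
dir=$(mktemp -d)
mkdir -p "$dir/big" "$dir/small"
head -c 1048576 /dev/zero > "$dir/big/blob"   # ~1 MiB
head -c 1024    /dev/zero > "$dir/small/blob" # ~1 KiB

du -xk --max-depth=1 "$dir" | sort -rn | head -n 3

rm -rf "$dir"
```

Repeating this while descending into the largest entry usually finds the disk hog in a few steps.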
I was supplied with the move from 750G to 1TB, so now let me work on applying it.

[root@usdet1lvdwb002 ~]# df -h
Filesystem           Size  Used Avail Use% Mounted on
devtmpfs              12G     0   12G   0% /dev
tmpfs                 12G     0   12G   0% /dev/shm
tmpfs                 12G  9.9M   12G   1% /run
tmpfs                 12G     0   12G   0% /sys/fs/cgroup
/dev/mapper/cl-root  744G  580G  164G  78% /
/dev/sda2           1014M  234M  781M  24% /boot
/dev/sda1            599M  7.3M  592M   2% /boot/efi
tmpfs                2.4G     0  2.4G   0% /run/user/0
tmpfs                2.4G  4.0K  2.4G   1% /run/user/968
overlay              744G  580G  164G  78% /var/lib/docker/overlay2/ea9a25ca518754c9aea7124bd12fc8e508f00ea06994e04029f4cab0fef7b883/merged
overlay              744G  580G  164G  78% /var/lib/docker/overlay2/0932e12a1b4200adb37f65c590de10fa7c873f8b85d2fe67be716f8a159931d1/merged
overlay              744G  580G  164G  78% /var/lib/docker/overlay2/b247fd04b3b4081c7137c04e56df42c9be8b26b668d63ac76df7a8213ff19f03/merged
overlay              744G  580G  164G  78% /var/lib/docker/overlay2/19f7bd2a1af369b2cde32a82646d8b24fb96d3f3d10ca7109f545397bf7af2ac/merged
overlay              744G  580G  164G  78% /var/lib/docker/overlay2/7fdb2594ee6cd21e5913397745a8258b2b323ff153905fc142eb4532bc1eb65a/merged
overlay              744G  580G  164G  78% /var/lib/docker/overlay2/d6d33f797eba15afd9757f2b464bb77a03e13bb04f3ea8e5200519aa989a2456/merged
overlay              744G  580G  164G  78% /var/lib/docker/overlay2/cd0efaa3c4ca6f4306707b24a9835d921f097d528c24b46016d08d4acdac2405/merged
overlay              744G  580G  164G  78% /var/lib/docker/overlay2/7d565dc86f507066f612310cf1d0ec96e2f8a63b298f2008a3de17be73bbe13f/merged
....

[root@usdet1lvdwb002 ~]# lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda           8:0    0     1T  0 disk
├─sda1        8:1    0   600M  0 part /boot/efi
├─sda2        8:2    0     1G  0 part /boot
└─sda3        8:3    0 748.4G  0 part
  ├─cl-root 253:0    0 743.4G  0 lvm  /
  └─cl-swap 253:1    0     5G  0 lvm  [SWAP]
sr0          11:0    1  1024M  0 rom
....
[root@usdet1lvdwb002 ~]# pvdisplay
  --- Physical volume ---
  PV Name               /dev/sda3
  VG Name               cl
  PV Size               748.41 GiB / not usable 1.98 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              191593
  Free PE               0
  Allocated PE          191593
  PV UUID               xMlTP5-qVmX-LJXM-QVXf-3xRk-2W3a-11yKk0
....

[root@usdet1lvdwb002 ~]# fdisk /dev/sda

Welcome to fdisk (util-linux 2.32.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

GPT PMBR size mismatch (1572863999 != 2147483647) will be corrected by write.
The backup GPT table is not on the end of the device. This problem will be corrected by write.

Command (m for help): p
Disk /dev/sda: 1 TiB, 1099511627776 bytes, 2147483648 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 532953BD-AC98-4EF9-9C0D-BBDDB871EB4C

Device       Start        End    Sectors   Size Type
/dev/sda1     2048    1230847    1228800   600M EFI System
/dev/sda2  1230848    3327999    2097152     1G Linux filesystem
/dev/sda3  3328000 1572863966 1569535967 748.4G Linux filesystem

Command (m for help): d
Partition number (1-3, default 3): 3

Partition 3 has been deleted.

Command (m for help): n
Partition number (3-128, default 3): 3
First sector (3328000-2147483614, default 3328000):
Last sector, +sectors or +size{K,M,G,T,P} (3328000-2147483614, default 2147483614):

Created a new partition 3 of type 'Linux filesystem' and of size 1022.4 GiB.
Partition #3 contains a LVM2_member signature.

Do you want to remove the signature? [Y]es/[N]o: n

Command (m for help): t
Partition number (1-3, default 3): 3
Partition type (type L to list all types): 8e
Type of partition 3 is unchanged: Linux filesystem.
Command (m for help): p
Disk /dev/sda: 1 TiB, 1099511627776 bytes, 2147483648 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 532953BD-AC98-4EF9-9C0D-BBDDB871EB4C

Device       Start        End    Sectors    Size Type
/dev/sda1     2048    1230847    1228800    600M EFI System
/dev/sda2  1230848    3327999    2097152      1G Linux filesystem
/dev/sda3  3328000 2147483614 2144155615 1022.4G Linux filesystem

Command (m for help): w
The partition table has been altered.
Syncing disks.
....

[root@usdet1lvdwb002 ~]# partx -u /dev/sda
....

[root@usdet1lvdwb002 ~]# pvresize /dev/sda3
  Physical volume "/dev/sda3" changed
  1 physical volume(s) resized or updated / 0 physical volume(s) not resized
...

[root@usdet1lvdwb002 ~]# lsblk
NAME        MAJ:MIN RM    SIZE RO TYPE MOUNTPOINT
sda           8:0    0      1T  0 disk
├─sda1        8:1    0    600M  0 part /boot/efi
├─sda2        8:2    0      1G  0 part /boot
└─sda3        8:3    0 1022.4G  0 part
  ├─cl-root 253:0    0  743.4G  0 lvm  /
  └─cl-swap 253:1    0      5G  0 lvm  [SWAP]
sr0          11:0    1   1024M  0 rom
...

[root@usdet1lvdwb002 ~]# pvs
  PV         VG Fmt  Attr PSize    PFree
  /dev/sda3  cl lvm2 a--  1022.41g 274.00g
...

[root@usdet1lvdwb002 ~]# lvs
  LV   VG Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root cl -wi-ao---- 743.41g
  swap cl -wi-ao----   5.00g
....

[root@usdet1lvdwb002 ~]# lvextend -r cl/root /dev/sda3
  Size of logical volume cl/root changed from 743.41 GiB (190313 extents) to 1017.41 GiB (260457 extents).
  Logical volume cl/root successfully resized.
meta-data=/dev/mapper/cl-root    isize=512    agcount=69, agsize=2844928 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1
data     =                       bsize=4096   blocks=194880512, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=5556, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 194880512 to 266707968

That's it, folks. Pretty easy.
  13. Here are my notes on setting up KVM on a fresh install of CentOS 8.3 on a VMware ESX host.

SETTINGS
Ram: 8G
Disk: 50G

USER SETTINGS
root / SUPERMAN!

NETWORK SETTINGS (2 interfaces required)
IPADDR="10.11.24.20"
PREFIX="23"
GATEWAY="10.11.24.1"
DNS1="10.11.26.11"
DNS2="10.11.27.11"
DOMAIN="atic.eventguyz.com eventguyz.com"

As you go through the install choose:
Server with GUI
Container Management
Development Tools
Graphical Administration Tools
Headless Management
System Tools

1. Start with a clean install of CentOS 8.3
2. Add the proxy to /etc/yum.conf and /etc/dnf/dnf.conf
3. Check CPU support for Intel VT or AMD-V: cat /proc/cpuinfo | egrep "vmx|svm", or run lscpu | grep Virtualization
4. sudo yum update, then check whether libvirtd is installed and running: sudo systemctl status libvirtd
5. sudo yum install @virt
6. Verify the kernel modules are loaded: lsmod | grep kvm
7. Tools for management: sudo dnf -y install virt-top libguestfs-tools
8. Start the KVM daemon: sudo systemctl enable --now libvirtd
9. Install virt-manager: sudo yum -y install virt-manager
10.
CREATE NETWORK BRIDGE:
sudo nmcli connection show

CREATE BRIDGE ON 2nd INTERFACE
nmcli connection show
nmcli connection delete e4014630-448b-5ad3-4992-f4678202147c
nmcli connection add type bridge autoconnect yes con-name br0 ifname br0
nmcli connection modify br0 ipv4.addresses 10.6.0.136/27 ipv4.method manual
nmcli connection modify br0 ipv4.gateway 10.6.0.129
nmcli connection modify br0 ipv4.dns 10.11.26.11 +ipv4.dns 10.11.27.11
nmcli connection delete ens224
nmcli connection add type bridge-slave autoconnect yes con-name ens224 ifname ens224 master br0
nmcli connection show
nmcli connection up br0
nmcli connection show br0
ip addr

VERIFY KVM INSTALLED
lsmod | grep kvm

HELPFUL TOOLS
dnf -y install virt-top libguestfs-tools

START/ENABLE KVM
systemctl enable --now libvirtd

MANAGE KVM VIRTUALS IN GUI
yum -y install virt-manager
dnf install virt-install virt-viewer libguestfs-tools
systemctl enable libvirtd.service
systemctl start libvirtd.service
systemctl status libvirtd.service
ip r

BEFORE
STP=yes
BRIDGING_OPTS=priority=32768
TYPE=Bridge
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=br0
UUID=b5a9dc97-ebd9-43aa-ba1f-88b308663a02
DEVICE=br0
ONBOOT=yes
IPADDR=10.6.0.136
PREFIX=27
GATEWAY=10.6.0.129
DNS1=10.11.26.11
DNS2=10.11.27.11

AFTER
STP=no
BRIDGING_OPTS=priority=32768
TYPE=Bridge
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=br0
UUID=b5a9dc97-ebd9-43aa-ba1f-88b308663a02
DEVICE=br0
ONBOOT=yes
IPADDR=10.6.0.136
PREFIX=27
GATEWAY=10.6.0.129
DNS1=10.11.26.11
DNS2=10.11.27.11
IPV6_DISABLED=yes

systemctl restart NetworkManager.service

VERIFY
nmcli device

SET PROXY
vi /etc/wgetrc
use_proxy=yes
https_proxy = http://10.47.196.156:80
http_proxy = http://10.47.196.156:80
ftp_proxy = http://10.47.196.156:80

CREATE TEST INSTANCE
cd /var/lib/libvirt/boot/
wget --no-check-certificate https://mirrors.edge.kernel.org/centos/8/isos/x86_64/CentOS-8.3.2011-x86_64-boot.iso
wget https://mirrors.edge.kernel.org/centos/8/isos/x86_64/CHECKSUM
sha256sum --ignore-missing -c CHECKSUM

virt-install \
--virt-type=kvm \
--name centos8-vm \
--memory 1024 \
--vcpus=1 \
--os-variant=rhel8.1 \
--cdrom=/var/lib/libvirt/boot/CentOS-8.3.2011-x86_64-boot.iso \
--network=bridge=br0,model=virtio \
--graphics vnc \
--disk path=/var/lib/libvirt/images/centos8.qcow2,size=20,bus=virtio,format=qcow2

virsh dumpxml rhel8-server | grep vnc

You need to use an SSH client to set up a tunnel and a VNC client to access the remote VNC VM display. Type the following SSH port-forwarding command from your client/desktop:

ssh root@10.6.0.136 -L 5906:127.0.0.1:5906

List images
virt-builder --list
virt-builder --list | egrep -i 'debian|ubuntu'
virt-builder --list | egrep -i centos

virt-install \
--name fed29 \
--ram 1024 \
--vcpus 1 \
--disk path=/var/lib/libvirt/images/fed29.img,size=20 \
--os-variant fedora29 \
--os-type linux \
--network bridge=br0 \
--graphics none \
--console pty,target_type=serial \
--location 'http://fedora.inode.at/releases/29/Server/x86_64/os/' \
--extra-args 'console=ttyS0,115200n8 serial'

virt-install \
--name ubu-vm-01 \
--vcpus 2 \
--memory 2048 \
--disk size=8,bus=virtio,format=qcow2 \
--boot kernel=/var/lib/libvirt/images/kernel.ubuntu,initrd=/var/lib/libvirt/images/initrd.ubuntu \
--network bridge=br0 \
--graphics none \
--console pty,target_type=sclp \
--cdrom /var/lib/libvirt/images/ubuntu-17.10-server-s390x.iso
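The `sha256sum --ignore-missing -c CHECKSUM` step above, shown in miniature with a throwaway stand-in file instead of the ISO (file names here are illustrative only):

```shell
# Miniature version of the ISO checksum verification: generate a checksum
# list for a stand-in file, then verify it the same way as the CHECKSUM
# file downloaded from the mirror.
printf 'stand-in iso contents\n' > /tmp/demo.iso
sha256sum /tmp/demo.iso > /tmp/CHECKSUM
sha256sum --ignore-missing -c /tmp/CHECKSUM && echo "checksum OK"
```

`--ignore-missing` matters because the real CHECKSUM file lists every ISO in the release directory, and you only downloaded one of them.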
  14. I have the need to figure out the best free network monitoring software that runs on-site (none of that cloud stuff). We totally cannot handle all the false alerts we get from the Windows-based SolarWinds Orion monitoring tool.

I have two primary areas of concern to monitor:

Infoblox
F5 GTM LTM ASM

Some of the contenders include:

Check_MK
Raw Edition: free forever
Enterprise (Standard Edition): $600/year
F5 LTM: integration (https://checkmk.com/integrations/f5_bigip_vserver) and (https://checkmk.com/integrations/f5_bigip_pool) and (https://checkmk.com/integrations/f5_bigip_cluster_v11) and (https://checkmk.com/integrations/f5_bigip_conns) and (https://checkmk.com/integrations/f5_bigip_vcmpfailover) and (https://checkmk.com/integrations/f5_bigip_snat)

Zabbix
Free
F5 LTM: integration (https://github.com/c6h3un/zabbix-f5) and (https://github.com/blka/zabbix-3.0-BIG-IP-F5)

LibreNMS
Free
F5 LTM: integration
  15. How to fix when you get the following error:

$ sudo yum update -y
Loaded plugins: fastestmirror, product-id, search-disabled-repos
Loading mirror speeds from cached hostfile
 * base: mirror.sjc02.svwh.net
 * extras: repos.hou.layerhost.com
 * updates: centos.host-engine.com
base    | 3.6 kB 00:00:00
extras  | 2.9 kB 00:00:00
updates | 2.9 kB 00:00:00
updates/7/x86_64/primary_db FAILED
http://mirror.centos.org/centos/7/updates/x86_64/repodata/54f2da0acb91b16db376e53c5ce78603a807fb6f45f99d8aa76112d58f722adb-primary.sqlite.bz2: [Errno 12] Timeout on http://mirror.centos.org/centos/7/updates/x86_64/repodata/54f2da0acb91b16db376e53c5ce78603a807fb6f45f99d8aa76112d58f722adb-primary.sqlite.bz2: (28, 'Operation too slow. Less than 1000 bytes/sec transferred the last 1 seconds')
Trying other mirror.
updates/7/x86_64/primary_db FAILED

First check your repos by running:

yum repolist

Then try cleaning the metadata:

yum clean metadata
  16. I utilize nmap and netcat (nc) often to validate that no firewall is blocking traffic. Below you will find some of the useful commands I use.

[root@usdet1lvdwb002 ~]# nmap -sS 10.47.38.55 -p443,4889,5443,8081,8443 --reason
Starting Nmap 7.70 ( https://nmap.org ) at 2022-04-07 14:12 EDT
Nmap scan report for oem-dev.int.thezah.com (10.47.38.55)
Host is up, received syn-ack ttl 244 (0.037s latency).

PORT     STATE SERVICE         REASON
443/tcp  open  https           syn-ack ttl 244
4889/tcp open  unknown         syn-ack ttl 244
5443/tcp open  spss            syn-ack ttl 244
8081/tcp open  blackice-icecap syn-ack ttl 244
8443/tcp open  https-alt       syn-ack ttl 244

Nmap done: 1 IP address (1 host up) scanned in 0.52 seconds

[root@usdet1lvdwb002 ~]# nmap 10.47.38.55 -p443,4889,5443,8081,8443 -sF --scanflags URGPSH
Starting Nmap 7.70 ( https://nmap.org ) at 2022-04-07 14:17 EDT
Nmap scan report for oem-dev.int.thezah.com (10.47.38.55)
Host is up (0.036s latency).

PORT     STATE  SERVICE
443/tcp  closed https
4889/tcp closed unknown
5443/tcp closed spss
8081/tcp closed blackice-icecap
8443/tcp closed https-alt

Nmap done: 1 IP address (1 host up) scanned in 0.55 seconds

[root@usdet1lvdwb002 ~]# nmap -sP -PS443,4889,5443,8081,8443 10.47.38.55
Starting Nmap 7.70 ( https://nmap.org ) at 2022-04-07 14:19 EDT
Nmap scan report for oem-dev.int.thezah.com (10.47.38.55)
Host is up (0.036s latency).
Nmap done: 1 IP address (1 host up) scanned in 0.26 seconds

[root@usdet1lvdwb002 ~]# nmap -sP -PA443,4889,5443,8081,8443 10.47.38.55
Starting Nmap 7.70 ( https://nmap.org ) at 2022-04-07 14:21 EDT
Nmap scan report for oem-dev.int.thezah.com (10.47.38.55)
Host is up (0.037s latency).
Nmap done: 1 IP address (1 host up) scanned in 0.27 seconds

[root@usdet1lvdwb002 ~]# nmap -sV -p443 10.47.38.55 --version-all
Starting Nmap 7.70 ( https://nmap.org ) at 2022-04-07 14:25 EDT
Nmap scan report for oem-dev.int.thezah.com (10.47.38.55)
Host is up (0.036s latency).
PORT STATE SERVICE VERSION 443/tcp open ssl/https 1 service unrecognized despite returning data. If you know the service/version, please submit the following fingerprint at https://nmap.org/cgi-bin/submit.cgi?new-service : SF-Port443-TCP:V=7.70%T=SSL%I=9%D=4/7%Time=624F2CA3%P=x86_64-redhat-linux- SF:gnu%r(GetRequest,232,"HTTP/1\.1\x20200\x20OK\r\nDate:\x20Thu,\x2007\x20 SF:Apr\x202022\x2018:26:05\x20GMT\r\nX-Content-Type-Options:\x20nosniff\r\ SF:nX-XSS-Protection:\x201;\x20mode=block\r\nX-ORCL-EMOA:\x20true\r\nConte SF:nt-Length:\x20337\r\nConnection:\x20close\r\nContent-Type:\x20text/html SF:;charset=ISO-8859-1\r\n\r\n<!DOCTYPE\x20HTML\x20PUBLIC\x20\"-//W3C//DTD SF:\x20HTML\x203\.2\x20Final//EN\">\n<html>\n\x20<head>\n\x20\x20<title>In SF:dex\x20of\x20/</title>\n\x20</head>\n\x20<body>\n<h1>Index\x20of\x20/</ SF:h1>\n<ul><li><a\x20href=\"bipunavailable\.html\">\x20bipunavailable\.ht SF:ml</a></li>\n<li><a\x20href=\"favicon\.ico\">\x20favicon\.ico</a></li>\ SF:n<li><a\x20href=\"omsunavailable\.html\">\x20omsunavailable\.html</a></ SF:li>\n</ul>\n</body></html>\n")%r(HTTPOptions,F5,"HTTP/1\.1\x20200\x20OK SF:\r\nDate:\x20Thu,\x2007\x20Apr\x202022\x2018:26:05\x20GMT\r\nX-Content- SF:Type-Options:\x20nosniff\r\nX-XSS-Protection:\x201;\x20mode=block\r\nAl SF:low:\x20OPTIONS,HEAD,GET,POST\r\nX-ORCL-EMOA:\x20true\r\nContent-Length SF::\x200\r\nConnection:\x20close\r\nContent-Type:\x20httpd/unix-directory SF:\r\n\r\n")%r(FourOhFourRequest,1B7,"HTTP/1\.1\x20404\x20Not\x20Found\r\ SF:nDate:\x20Thu,\x2007\x20Apr\x202022\x2018:26:06\x20GMT\r\nX-Content-Typ SF:e-Options:\x20nosniff\r\nX-XSS-Protection:\x201;\x20mode=block\r\nConte SF:nt-Length:\x20225\r\nConnection:\x20close\r\nContent-Type:\x20text/html SF:;\x20charset=iso-8859-1\r\n\r\n<!DOCTYPE\x20HTML\x20PUBLIC\x20\"-//IETF SF://DTD\x20HTML\x202\.0//EN\">\n<html><head>\n<title>404\x20Not\x20Found< SF:/title>\n</head><body>\n<h1>Not\x20Found</h1>\n<p>The\x20requested\x20U 
SF:RL\x20/nice\x20ports,/Trinity\.txt\.bak\x20was\x20not\x20found\x20on\x2
SF:0this\x20server\.</p>\n</body></html>\n")%r(RTSPRequest,1BA,"HTTP/1\.1\
SF:x20400\x20Bad\x20Request\r\nDate:\x20Thu,\x2007\x20Apr\x202022\x2018:26
SF::16\x20GMT\r\nX-Content-Type-Options:\x20nosniff\r\nX-XSS-Protection:\x
SF:201;\x20mode=block\r\nContent-Length:\x20226\r\nConnection:\x20close\r\
SF:nContent-Type:\x20text/html;\x20charset=iso-8859-1\r\n\r\n<!DOCTYPE\x20
SF:HTML\x20PUBLIC\x20\"-//IETF//DTD\x20HTML\x202\.0//EN\">\n<html><head>\n
SF:<title>400\x20Bad\x20Request</title>\n</head><body>\n<h1>Bad\x20Request
SF:</h1>\n<p>Your\x20browser\x20sent\x20a\x20request\x20that\x20this\x20se
SF:rver\x20could\x20not\x20understand\.<br\x20/>\n</p>\n</body></html>\n");

Service detection performed. Please report any incorrect results at https://nmap.org/submit/ .
Nmap done: 1 IP address (1 host up) scanned in 373.98 seconds

You can also create an input txt file with IP and port like this:

10.47.38.55 443,4889,5443,8081,8443
10.45.12.47 80,443,22

Then create your script on your linux box that looks like this:

#!/bin/bash
# Require an input file; quote "$1" so paths with spaces don't break the test
if [ ! -f "$1" ]
then
    echo "Error: Must supply file"
    exit 1
fi
# Each input line is "host port[,port...]"; read splits it for us
while read -r host port
do
    echo "Scanning $host : $port "
    nmap "$host" -p "$port"
done < "$1"

You would then run this at your command prompt on your linux box where these two files are:

bash script.sh input.txt

VERY VERY helpful.
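The post mentions netcat alongside nmap but shows no nc example. On boxes where neither tool is installed, bash's /dev/tcp pseudo-device gives a similar quick open/closed check (a sketch; the host and port below are placeholders to swap for your target):

```shell
# Quick TCP reachability check using bash's built-in /dev/tcp device.
# timeout bounds the attempt so a filtered port doesn't hang the shell.
check_port() {
    local host=$1 port=$2
    if timeout 3 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
        echo "$host:$port open"
    else
        echo "$host:$port closed/filtered"
    fi
}
check_port 127.0.0.1 1   # placeholder target; almost nothing listens on port 1
```

With netcat itself the equivalent one-liner is `nc -zv <host> <port>`.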
  17. So I ran into a gotcha the other day, so I figured it would be a good idea to capture it on our Tech Blog.

To make things easier we all use scripts to produce information as a user logs in. Previously (years back) I followed some direction and added a shell script (login-info.sh) to /etc/profile.d, since anything with the extension .sh there runs during login; but that includes everybody. Probably not a big deal until you try to run scp, which then breaks because it also runs this script, so you end up getting a protocol error (I believe it's something to do with $TERM).

Anyhow, with more research I found this to be true in reference to where to place scripts:

Scope                            Shell             Script or directory to modify
User                             Bash              ~/.bash_profile
User                             Bourne or Korn    ~/.profile
User                             C (csh)           ~/.login
Global (system-wide)             Bash              /etc/profile
Global (all users except root)   Bash              /etc/profile.d
root                             Bash              /root/.bash_profile

I originally had the following script under /etc/profile.d, which prevented everyone from being able to use scp. Then I removed the file from /etc/profile.d and I could use scp again. I decided to copy the contents of this file and paste them at the end of /etc/profile, which is assigned to all users (system-wide), and I was working again; scp continues to work.
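For completeness, there is a standard guard (my suggestion, not from the original post) that lets banner scripts stay in /etc/profile.d: bail out unless the shell is interactive, so scp/sftp, which run a non-interactive shell, never see the output that breaks their protocol.

```shell
# Put this at the top of a /etc/profile.d/*.sh banner script:
# only print anything when the shell is interactive ($- contains "i").
case $- in
    *i*) echo "interactive shell: safe to print the login banner" ;;
    *)   echo "non-interactive shell (scp, cron, etc.): stay quiet" ;;
esac
```

Run as a script (as here) the shell is non-interactive, which is exactly the scp case the guard protects against.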
###### LOGIN SCRIPT ########
clear
#figlet -f slant $(hostnamectl --pretty)
echo " __ _ _______ _______ _ _ _ _____ ______ _ _ ";
echo " | \ | |______ | | | | | | |_____/ |____/ ";
echo " | \_| |______ | |__|__| |_____| | \_ | \_ ";
echo " ";
echo " _____ _ _ _______ _____ ______ _____ _ _";
echo " | | | | | | |_____] |_____] | | \___/ ";
echo " __| |_____| | | | | |_____] |_____| _/ \_";
echo " ";
printf "\n"
printf "\t- %s\n\t- Kernel %s\n" "$(cat /etc/redhat-release)" "$(uname -r)"
printf "\n"
echo " Welcome: $USER"
date=`date`
load=`cat /proc/loadavg | awk '{print $1}'`
root_usage=`df -h / | awk '/\// {print $(NF-1)}'`
memory_usage=`free -m | awk '/Mem:/ { total=$2 } /buffers\/cache/ { used=$3 } END { printf("%3.1f%%", used/total*100)}'`
swap_usage=`free -m | awk '/Swap/ { if ($2 == 0) print "0.0%"; else printf("%3.1f%%", $3/$2*100) }'`
users=`users | wc -w`
olusers=`who | cut -d' ' -f1 | sort | uniq`
time=`uptime | grep -ohe 'up .*' | sed 's/,/\ hours/g' | awk '{ printf $2" "$3 }'`
processes=`ps aux | wc -l`
ethup=$(ip -4 ad | grep 'state UP' | grep -v virbr0 | awk -F ":" '!/^[0-9]*: ?lo/ {print $2}')
ip=$(ip ad show dev $ethup | grep -v inet6 | grep inet | awk '{print $2}')
echo "System information as of: $date"
echo
printf "System load:\t%s\tIP Address:\t%s\n" $load $ip
printf "Memory usage:\t%s\tSystem uptime:\t%s\n" $memory_usage "$time"
printf "Usage on /:\t%s\tSwap usage:\t%s\n" $root_usage $swap_usage
printf "Local Users:\t%s\tProcesses:\t%s\n" $users $processes
echo "
=====================================================================
- Users Logged on.....: $olusers
=====================================================================
- Users that need access to share files add them to the restricted
  group --> sftp user@10.11.24.11 (they are jailed to their home directory)
-
=====================================================================
"
echo
[ -f /etc/motd.tail ] && cat /etc/motd.tail || true
  18. Probably the easiest command and most popular one is using the ip command. Example of showing all routes [root@usdet1lvdwb002 ~]# ip route list default via 10.6.0.129 dev ens224 proto static metric 100 10.6.0.128/27 dev ens224 proto kernel scope link src 10.6.0.136 metric 100 172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 172.18.0.0/16 dev br-d21cec45669b proto kernel scope link src 172.18.0.1 You can use the ip route get <address> command to ask the kernel to report the route it would use to send a packet to the specified address: [root@usdet1lvdwb002 ~]# ip route get 10.11.24.11 10.11.24.11 via 10.6.0.129 dev ens224 src 10.6.0.136 uid 0 cache Another way is running this command [root@usdet1lvdwb002 ~]# ip route show to match 10.11.24.11 default via 10.6.0.129 dev ens224 proto static metric 100 10.6.0.129 is my default route. If I ask for an address that would not go over the default route: [root@usdet1lvdwb002 ~]# ip route get 10.6.0.130 10.6.0.130 dev ens224 src 10.6.0.136 uid 0 cache Since the IP is in the same subnet it doesn't need the route IP. I like using tracepath to see how the traffic is going to a certain IP Address. Below is an example [root@usdet1lvdwb002 ~]# tracepath 10.44.112.234 1?: [LOCALHOST] pmtu 1500 1: _gateway 16.178ms 1: _gateway 15.372ms 2: dennis.gearcrushers.com 0.329ms 3: 10.10.254.13 0.427ms asymm 4 4: no reply 5: 100.65.0.121 46.292ms asymm 6 6: 100.65.0.122 36.879ms asymm 11 7: ussat1-ccs0001_eth1-57.gearcrushers.com 43.124ms asymm 12 8: ussat1-dcs0001_eth1-1_10.gearcrushers.com 45.858ms asymm 12 9: gns4.gearcrushers.com 37.880ms reached Resume: pmtu 1500 hops 9 back 12 So instead of using the ip command you can use the route command. 
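To see why 10.6.0.130 stays on-link while 10.11.24.11 falls through to the default gateway, note that the kernel's route match boils down to a mask-and-compare: a destination matches a route only if (destination AND netmask) equals the route's network. The same test in plain shell arithmetic, using the addresses from the routing table above:

```shell
# Mask-and-compare route matching against the connected route 10.6.0.128/27.
ip2int() { IFS=. read -r a b c d <<< "$1"; echo $(( (a<<24) | (b<<16) | (c<<8) | d )); }

net=$(ip2int 10.6.0.128)                              # network of the connected route
mask=$(( (0xffffffff << (32 - 27)) & 0xffffffff ))    # /27 -> 255.255.255.224

for dest in 10.6.0.130 10.11.24.11; do
    if [ $(( $(ip2int "$dest") & mask )) -eq "$net" ]; then
        echo "$dest: on-link (matches 10.6.0.128/27)"
    else
        echo "$dest: no match, goes via the default gateway"
    fi
done
```

This is only a sketch of the idea; the kernel actually does a longest-prefix match across every route in the table, not just one.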
Using the following command will display the list of routes currently configured:

[root@usdet1lvdwb002 ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.6.0.129      0.0.0.0         UG    100    0        0 ens224
10.6.0.128      0.0.0.0         255.255.255.224 U     100    0        0 ens224
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
172.18.0.0      0.0.0.0         255.255.0.0     U     0      0        0 br-d21cec45669b

There are other tools you can use besides route, like netstat. This command gives statistics about the network. With it you can do more than just print the routing table:

Print network connections
Routing tables
Interface statistics
Masquerade connections
Multicast memberships

To check the routing table run the following command (-r displays the routing table and -n skips name resolution, printing IP addresses only):

[root@usdet1lvdwb002 ~]# netstat -rn
Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
0.0.0.0         10.6.0.129      0.0.0.0         UG        0 0          0 ens224
10.6.0.128      0.0.0.0         255.255.255.224 U         0 0          0 ens224
172.17.0.0      0.0.0.0         255.255.0.0     U         0 0          0 docker0
172.18.0.0      0.0.0.0         255.255.0.0     U         0 0          0 br-d21cec45669b
  19. Here are the factory settings from a default CentOS 7 box: default_profile_centos7.txt
  20. Turns out dnf has changed the way it deals with proxies. If you're using basic proxy authentication then you need to specify it:

vi /etc/dnf/dnf.conf

# proxy settings
proxy=http://proxy.domain.com:3128/
proxy_username=username
proxy_password=password
proxy_auth_method=basic
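A quick way to confirm which proxy keys dnf will actually read, shown here against a throwaway copy of the file so it runs anywhere (on a real host, point the grep at /etc/dnf/dnf.conf instead):

```shell
# Write a sample dnf.conf and list every proxy-related key in it.
cat > /tmp/dnf-demo.conf <<'EOF'
[main]
proxy=http://proxy.domain.com:3128/
proxy_username=username
proxy_password=password
proxy_auth_method=basic
EOF
grep -E '^proxy' /tmp/dnf-demo.conf
```

In my experience a missing proxy_auth_method=basic is the usual culprit when a basic-auth proxy that used to work starts returning 407s under dnf.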
  21. I use a terminal program on my Mac called ZOC by EmTec, and in comparison to all the other terminal programs I have used, it's by far the best all-around program. On the Mac I have tried iTerm2 (garbage and very featureless), MacTerm,

In your home directory (I just type cd and press enter, which brings me there) I type vi .bash_profile, and my bash profile looks like the one below, which gives me color. It's mainly about the PS1 setting.

# .bash_profile

if [ -f ~/.bashrc ]; then
    . ~/.bashrc
fi

# User specific environment and startup programs

PATH=$PATH:$HOME/.local/bin:$HOME/bin

export PATH
export {http,https,ftp}_proxy="http://NAO\dhosang:qMtzWRhSTZD8rNHm@10.43.196.154:80"

[[ -s "$HOME/.rvm/scripts/rvm" ]] && source "$HOME/.rvm/scripts/rvm" # Load RVM into a shell session *as a function*

parse_git_branch() {
    git branch 2> /dev/null | sed -e '/^[^*]/d' -e 's/* \(.*\)/ (\1)/'
}

# PS1 Base :x
# Origin [\u@\h \W]\$
export PS1="\u@\h \[\033[32m\]\w\[\033[33m\]\$(parse_git_branch)\[\033[00m\] $ "

LS_COLORS="di=4;33"
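To see what the parse_git_branch helper in that profile actually does, here is the same sed filter run against canned `git branch` output, so it works without a git repo:

```shell
# parse_git_branch boils down to this sed pipeline: drop every line that
# doesn't start with "*" (the current branch marker), then rewrite
# "* name" as " (name)" for embedding in the prompt.
printf '  main\n* feature/login\n  develop\n' \
    | sed -e '/^[^*]/d' -e 's/* \(.*\)/ (\1)/'
```

The output is " (feature/login)", which is exactly the text `$(parse_git_branch)` splices into PS1 after the working directory.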
  22. Okay, today I'm having a bad day. Tried to do a yum update and it locked out my user account.

dhosang@usdet1lvdwb001:$ sudo yum update -y
Loaded plugins: fastestmirror, product-id, search-disabled-repos
Determining fastest mirrors
Could not get metalink https://mirrors.fedoraproject.org/metalink?repo=epel-7&arch=x86_64&infra=stock&content=centos error was
14: curl#22 - "Invalid file descriptor"
 * base: mirror.dal.nexril.net
 * centos-sclo-rh: centos.mirrors.tds.net
 * centos-sclo-sclo: repos.lax.layerhost.com
 * epel: mirror.arizona.edu
 * extras: centos.mirrors.tds.net
 * remi-php72: mirror.team-cymru.com
 * remi-safe: mirror.team-cymru.com
 * rpmfusion-free-updates: mirror.math.princeton.edu
 * updates: repos.mia.quadranet.com
https://ci.tuleap.net/yum/tuleap/rhel/6/dev/x86_64/repodata/repomd.xml: [Errno 14] curl#22 - "Invalid file descriptor"
Trying other mirror.
http://mirror.chpc.utah.edu/pub/centos/7.9.2009/os/x86_64/repodata/repomd.xml: [Errno 14] HTTP Error 407 - Proxy Authentication Required
Trying other mirror.
http://centos.mirror.lstn.net/7.9.2009/os/x86_64/repodata/repomd.xml: [Errno 14] HTTP Error 407 - Proxy Authentication Required
Trying other mirror.

The above is definitely cut down from the pages of Proxy Authentication Required error messages that eventually lock my account out.
So once I unlocked the account that is displayed when I type echo $http_proxy I do a quick test to see if I have internet access by running: dhosang@usdet1lvdwb001:$ curl -I https://thezah.com HTTP/1.1 200 Connection established HTTP/1.1 200 OK Date: Wed, 02 Dec 2020 18:56:36 GMT Content-Type: text/html;charset=UTF-8 Pragma: no-cache X-IPS-LoggedIn: 0 Vary: cookie,Accept-Encoding,User-Agent X-XSS-Protection: 0 X-Frame-Options: sameorigin Expires: Wed, 02 Dec 2020 18:57:06 GMT Cache-Control: max-age=30, public Last-Modified: Wed, 02 Dec 2020 18:56:36 GMT CF-Cache-Status: DYNAMIC cf-request-id: 06c66958fd0000f36115800000000001 Expect-CT: max-age=604800, report-uri="https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct" Report-To: {"endpoints":[{"url":"https:\/\/a.nel.cloudflare.com\/report?s=8ihM44Rlq7k6fVvaopR4jDTQ6o5jmyxqBw4Lkp%2B2TKsSw4dqrJ1IWbMRD%2FMy%2Fp9pnYGRTyUMgWdMbbQNcNWfIiHIIS9qDhdN1ux9"}],"group":"cf-nel","max_age":604800} NEL: {"report_to":"cf-nel","max_age":604800} Server: cloudflare CF-RAY: 5fb744d4cb58f361-ATL Proxy-Connection: Keep-Alive Connection: Keep-Alive Set-Cookie: __cfduid=d75cc7060b11361cce1c54a2cf72f113d1606935396; expires=Fri, 01-Jan-21 18:56:36 GMT; path=/; domain=.thezah.com; HttpOnly; SameSite=Lax; Secure Set-Cookie: ips4_IPSSessionFront=49e2f0fb3cb3740e6db92fccb9b7b35c; path=/; secure; HttpOnly Set-Cookie: ips4_guestTime=1606935396; path=/; secure; HttpOnly All that is really important is that you got HTTP/1.1 200 OK So what's next.. you proved that you can get to the internet fine but your yum update or yum upgrade is failing proxy authentication. 
Try doing a search for something like wireshark dhosang@usdet1lvdwb001:~$ sudo yum search wireshark Loaded plugins: fastestmirror, product-id, search-disabled-repos Loading mirror speeds from cached hostfile * base: mirror.dal.nexril.net * centos-sclo-rh: centos.mirrors.tds.net * centos-sclo-sclo: repos.lax.layerhost.com * epel: mirror.arizona.edu * extras: centos.mirrors.tds.net * remi-php72: mirror.team-cymru.com * remi-safe: mirror.team-cymru.com * rpmfusion-free-updates: mirror.math.princeton.edu * updates: repos.mia.quadranet.com =========================================================================== N/S matched: wireshark ============================================================================ wireshark-devel.i686 : Development headers and libraries for wireshark wireshark-devel.x86_64 : Development headers and libraries for wireshark wireshark-gnome.x86_64 : Gnome desktop integration for wireshark wireshark.i686 : Network traffic analyzer wireshark.x86_64 : Network traffic analyzer Name and summary matches only, use "search all" for everything. This clearly shows that yum is getting through the proxy... but wait, you still are getting proxy authentication errors when trying to do a yum update?
  23. sudo dnf remove --duplicates Tried again: sudo dnf install 'dnf-command(config-manager)' --allowerasing Running transaction check Error: transaction check vs depsolve: (flatpak-selinux = 1.6.2-3.el8_2 if selinux-policy-targeted) is needed by flatpak-1.6.2-3.el8_2.x86_64 rpmlib(RichDependencies) <= 4.12.0-1 is needed by flatpak-1.6.2-3.el8_2.x86_64 To diagnose the problem, try running: 'rpm -Va --nofiles --nodigest'. You probably have corrupted RPMDB, running 'rpm --rebuilddb' might fix the issue. The downloaded packages were saved in cache until the next successful transaction. You can remove cached packages by executing 'dnf clean packages'. [dhosang@net1 ~]$ -rw-r--r-- 1 root root 173 Jul 12 2019 google-chrome.repo -rw-r--r-- 1 root root 1203 Dec 18 2019 epel-testing.repo -rw-r--r-- 1 root root 1266 Dec 18 2019 epel-testing-modular.repo -rw-r--r-- 1 root root 1104 Dec 18 2019 epel.repo -rw-r--r-- 1 root root 1249 Dec 18 2019 epel-playground.repo -rw-r--r-- 1 root root 1167 Dec 18 2019 epel-modular.repo -rw-r--r-- 1 root root 928 Jun 2 21:02 CentOS-Media.repo -rw-r--r-- 1 root root 338 Jun 2 21:02 CentOS-fasttrack.repo -rw-r--r-- 1 root root 756 Jun 2 21:02 CentOS-Extras.repo -rw-r--r-- 1 root root 668 Jun 2 21:02 CentOS-Debuginfo.repo -rw-r--r-- 1 root root 1043 Jun 2 21:02 CentOS-CR.repo -rw-r--r-- 1 root root 712 Jun 2 21:02 CentOS-Base.repo -rw-r--r-- 1 root root 731 Jun 2 21:02 CentOS-AppStream.repo -rw-r--r-- 1 root root 1075 Nov 3 15:15 epel.repo.rpmsave -rw-r--r-- 1 root root 798 Nov 3 15:21 CentOS-centosplus.repo -rw-r--r-- 1 root root 738 Nov 3 15:24 CentOS-HA.repo -rw-r--r-- 1 root root 736 Nov 3 15:25 CentOS-PowerTools.repo -rw-r--r-- 1 root root 1382 Nov 3 15:25 CentOS-Sources.repo -rw-r--r-- 1 root root 743 Nov 3 15:27 CentOS-Devel.repo Official Centos Repos [Base] – The packages that make up Centos, as it is released on the ISOs. It is enabled by default [Updates] – Updated packages to [Base] released after the Centos ISOs. 
This will be Security, BugFix, or Enhancements to the [Base] software. It is enabled by default [Addons] – Contains packages required in order to build the main Distribution or packages produced by SRPMS built in the main Distribution, but not included in the main Red Hat package tree (mysql-server in Centos-3.x falls into this category). Packages contained in the addons repository should be considered essentially a part of the core distribution, but may not be in the upstream Package tree. It is enabled by default [Contrib] – Packages contributed by the Centos Users, which do not overlap with any of the core Distribution packages. These packages have not been tested by the Centos developers, and may not track upstream version releases very closely. It is disabled by default [Centosplus] – Packages contributed by Centos developers and the users. These packages might replace rpm’s included in the core distribution. You should understand the implications of enabling and using packages from this repository. It is disabled by default [csgfs] – Packages that make up the Cluster Suite and Global File System. It is disabled by default [Extras] – Packages built and maintained by the Centos developers that add functionality to the core distribution. These packages have undergone some basic testing, should track upstream release versions fairly closely and will never replace any core distribution package. It is enabled by default [Testing] – Packages that are being tested prior to release, you should not use this repository except for a specific reason. It is disabled by default You can have a look at the packages here: http://dev.centos.org/centos/6/ http://dev.centos.org/centos/7/ Base Repository: Updates Repository: Addons Repository: Contrib Repository: Centosplus Repository: CSGFS: Extras: Testing: Section 2 Then tried sudo rpm --rebuilddb
  24. Total 1.3 MB/s | 755 MB 09:36 Running transaction check Error: transaction check vs depsolve: (flatpak-selinux = 1.6.2-3.el8_2 if selinux-policy-targeted) is needed by flatpak-1.6.2-3.el8_2.x86_64 rpmlib(RichDependencies) <= 4.12.0-1 is needed by flatpak-1.6.2-3.el8_2.x86_64 To diagnose the problem, try running: 'rpm -Va --nofiles --nodigest'. You probably have corrupted RPMDB, running 'rpm --rebuilddb' might fix the issue. The downloaded packages were saved in cache until the next successful transaction. You can remove cached packages by executing 'dnf clean packages'. [dhosang@net1 ~]$
  25. When I try to run sudo dnf update I get a bunch of errors that state "... conflicts with file from package ...". Researching the wonderful world of the web, I found a suggestion to check for duplicates; if running the following command produces any results, you are in a bad way.

sudo dnf repoquery --duplicated

[dennis@net1 ~]$ sudo dnf repoquery --duplicated
Extra Packages for Enterprise Linux 8 - x86_64  0.0 B/s | 0 B 00:00
Docker CE Stable - x86_64                       0.0 B/s | 0 B 00:00
Failed to synchronize cache for repo 'epel', ignoring this repo.
Failed to synchronize cache for repo 'docker-ce-stable', ignoring this repo.
Last metadata expiration check: 0:39:17 ago on Tue 03 Nov 2020 11:49:50 AM EST.
kernel-devel-0:3.10.0-1127.10.1.el7.x86_64
kernel-devel-0:3.10.0-1127.13.1.el7.x86_64
kernel-devel-0:3.10.0-1127.18.2.el7.x86_64
kernel-devel-0:3.10.0-1127.19.1.el7.x86_64
kernel-devel-0:3.10.0-1127.el7.x86_64
[dennis@net1 ~]$

So as you can see, I'm in a bad way. Since I'm running this server on Proxmox, I went to the GUI and backed up this VM before running the next command, which "could" render the server inaccessible (so I need the ability to restore):

sudo dnf --disableplugin=protected_packages remove $(sudo dnf repoquery --duplicated --latest-limit -1 -q)
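What `--duplicated` is flagging, in miniature: the same package name installed at more than one version, exactly like the five kernel-devel entries above. Sample data below, not real rpm output:

```shell
# Count installed versions per package name and print any name seen more
# than once; this is the condition repoquery --duplicated reports on.
printf '%s\n' \
    'kernel-devel 3.10.0-1127.10.1' \
    'kernel-devel 3.10.0-1127.13.1' \
    'bash 4.4.19' \
| awk '{count[$1]++} END {for (p in count) if (count[p] > 1) print p, count[p]"x"}'
```

Multiple kernel-devel versions are often intentional (one per installed kernel); duplicates of ordinary packages are the ones that break dnf transactions.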
  26. First it would be helpful to get a list of users that are already on your Linux box. Get a List of All Users using the /etc/passwd File Local user information is stored in the /etc/passwd file. Each line in this file represents login information for one user. less /etc/passwd Below is an example $ less /etc/passwd root:x:0:0:root:/root:/bin/bash bin:x:1:1:bin:/bin:/sbin/nologin daemon:x:2:2:daemon:/sbin:/sbin/nologin adm:x:3:4:adm:/var/adm:/sbin/nologin lp:x:4:7:lp:/var/spool/lpd:/sbin/nologin sync:x:5:0:sync:/sbin:/bin/sync shutdown:x:6:0:shutdown:/sbin:/sbin/shutdown halt:x:7:0:halt:/sbin:/sbin/halt mail:x:8:12:mail:/var/spool/mail:/sbin/nologin operator:x:11:0:operator:/root:/sbin/nologin games:x:12:100:games:/usr/games:/sbin/nologin ftp:x:14:50:FTP User:/var/ftp:/sbin/nologin nobody:x:99:99:Nobody:/:/sbin/nologin systemd-network:x:192:192:systemd Network Management:/:/sbin/nologin dbus:x:81:81:System message bus:/:/sbin/nologin polkitd:x:999:997:User for polkitd:/:/sbin/nologin postfix:x:89:89::/var/spool/postfix:/sbin/nologin sshd:x:74:74:Privilege-separated SSH:/var/empty/sshd:/sbin/nologin tss:x:59:59:Account used by the trousers package to sandbox the tcsd daemon:/dev/null:/sbin/nologin nginx:x:998:996:nginx user:/var/cache/nginx:/bin/sh mysql:x:27:27:MariaDB Server:/var/lib/mysql:/sbin/nologin apache:x:48:48:Apache:/usr/share/httpd:/sbin/nologin dockerroot:x:997:993:Docker User:/var/lib/docker:/sbin/nologin netadm1n:x:1000:1000:netadm1n:/home/netadm1n:/bin/bash Each line has seven fields delimited by colons that contain the following information: User name Encrypted password (x means that the password is stored in the /etc/shadow file) User ID number (UID) User’s group ID number (GID) Full name of the user (GECOS) User home directory Login shell (defaults to /bin/bash) If you want to display only the username you can use either awk or cut commands to print only the first field containing the username: Using awk example: $ awk -F: '{ print $1}' 
/etc/passwd root bin daemon adm lp sync shutdown halt mail operator games ftp nobody systemd-network dbus polkitd postfix sshd tss nginx mysql apache dockerroot netadm1n Using cut example: $ cut -d: -f1 /etc/passwd root bin daemon adm lp sync shutdown halt mail operator games ftp nobody systemd-network dbus polkitd postfix sshd tss nginx mysql apache dockerroot netadm1n So you may have identified your Linux system doesn't have a user on it that needs to exist. Let's go to the next section that describes how to add a user How to Create Users in Linux In Linux, you can create a user account and assign the user to different groups using the useradd command. The general syntax for the useradd command is as follows: useradd [OPTIONS] USERNAME NOTE: To be able to use the useradd command and create new users you need to be logged in as root or a user with sudo access. To create a new user account type useradd followed by the username. For example to create a new user named username you would run: useradd username The command adds an entry to /etc/passwd /etc/shadow /etc/group /etc/gshadow files To be able to log in as the newly created user, you need to set the user password. To do that run the passwd command followed by the username: passwd username You will be prompted to enter and confirm the password. In most Linux distros, when creating a new user account with the useradd command the user home directory is not created. Use the -m (--create-home) option to create the user home directory as /home/username: useradd -m username The command above creates the new user’s home directory and copies files from /etc/skel directory to the user’s home directory.
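Building on the awk and cut examples above, you can also filter by field instead of just printing one. This sketch lists only root plus "human" accounts; it assumes UID_MIN is 1000 (the RHEL/CentOS 7+ default, worth confirming in /etc/login.defs):

```shell
# Print root and regular user accounts from /etc/passwd.
# Field 3 is the UID; system accounts sit below UID_MIN (assumed 1000 here).
awk -F: '$3 == 0 || $3 >= 1000 {print $1, "uid=" $3}' /etc/passwd
```

The same UID test is a handy building block for audit one-liners, e.g. piping the output through grep -v to spot unexpected login-capable accounts.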