Plesk 10.4 for Linux/Unix
Plesk 10.4 for Windows
During an upgrade to Parallels Plesk Panel (PP) 10.4.4, DNS records will be changed in zones where the IP address of the primary A record differs from the IP address of the domain's hosting and the domain has subdomains.
There is a bug in the upgrade procedure for version 10.4.4.
Before upgrading to PP 10.4.4, use the Pre-Upgrade Checker to determine which DNS zones on your server will be affected.
After upgrading to PP 10.4.4, use the DNS Zone Validation Tool to determine which records were added to or deleted from DNS zones.
You will then be able to correct these DNS zones manually using the reported information, or by using a generated SQL script that lets you roll back the changes made during the upgrade.
Output example of Pre-Upgrade Checker:
[INFO] ==> STEP 26: Checking for customized DNS zones which will be modified during upgrade…
[INFO] A or AAAA records in DNS zone domain.com will be modified or deleted after upgrade.
[WARNING] A or AAAA records in DNS zones will be modified or deleted after upgrade.
[INFO] Result: Warning
Output example of DNS Zone Validation Tool:
# php -d safe_mode=0 dns_zone_validator.php
[INFO] ==> Installed Plesk version/build: 10.4.4 CentOS 5 1013111102.18
[INFO] ==> Detect system configuration
[INFO] OS: CentOS release 5.2 (Final) Kernel \r on an \m
[INFO] Arch: i386
[INFO] ==> Validate given db password
[INFO] Result: OK
[INFO] ==> Plesk DNS zones validator version: 10.4.0.27
[INFO] Found pre-upgrade database dump mysql.preupgrade.8.6.0-10.13.4.20120327-083339.dump.gz
[INFO] Decompressing pre-upgrade database dump to /var/lib/psa/dumps/unpacked_3f7c10c5f773fc2d0af632580ab61b40
[INFO] Extracting necessary tables to file /var/lib/psa/dumps/prepared_3f7c10c5f773fc2d0af632580ab61b40
[INFO] Creating temporary database DnsZoneValidator_3f7c10c5f773fc2d0af632580ab61b40
[INFO] Deleting file /var/lib/psa/dumps/unpacked_3f7c10c5f773fc2d0af632580ab61b40
[INFO] Deleting file /var/lib/psa/dumps/prepared_3f7c10c5f773fc2d0af632580ab61b40
[INFO] Following DNS record was deleted from zone domain.com: domain.com. A 10.52.52.105
[INFO] Following DNS record was deleted from zone domain2.com: ns.domain2.com. A 10.52.52.104
[INFO] Following DNS record was deleted from zone domain2.com: domain2.com. A 10.52.52.104
[INFO] Following DNS record was deleted from zone domain2.com: webmail.domain2.com. A 10.52.52.104
[INFO] Following DNS record was deleted from zone domain2.com: mail.domain2.com. A 10.52.52.104
[INFO] Following DNS record was deleted from zone domain2.com: sub1.domain2.com. A 10.52.52.104
[INFO] Following DNS record was added to zone domain2.com: domain2.com. A 10.52.52.104
[INFO] Following DNS record was added to zone domain2.com: ns.domain2.com. A 10.52.52.105
[INFO] Following DNS record was added to zone domain2.com: mail.domain2.com. A 10.52.52.105
[INFO] Following DNS record was added to zone domain2.com: webmail.domain2.com. A 10.52.52.105
[INFO] Following DNS record was added to zone domain2.com: sub1.domain2.com. A 10.52.52.105
[INFO] Deleting temporary database DnsZoneValidator_3f7c10c5f773fc2d0af632580ab61b40
[INFO] For fix this records directly in database you can use SQL file: /root/fix_dns_records.sh
Found errors: 0; Found Warnings: 0
The script generates a shell file with commands that fix the described changes.
If DNS records that were changed by the upgrade are then modified manually before the script is run, the script will offer the option to add such records back to the zone (records that did not exist when the script was started) as they were before the upgrade.
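To apply the generated fixes, you would run the file named in the tool's output. A minimal sketch (review the file's contents first, since it modifies the Plesk database directly):

sh /root/fix_dns_records.sh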
Is it a bad time for the Internet? Not exactly, but researchers have unveiled a conjuring trick that highlights flaws in domains configured with Domain Name System Security Extensions (DNSSEC). Over the past couple of months, the Internet has experienced a massive number of DNS reflection and amplification DDoS attacks abusing DNSSEC-configured domains. Financial services in particular have been targeted by DNS amplification attack campaigns.
DNSSEC is the protocol that provides authentication of DNS data (the system that maps domain names to IP addresses). By using DNSSEC certificates with very short lifetimes, an attacker can also use tools such as NTP amp, DNS amp 1.x and the Dominate TCP attack script, and can drop all connections to trusted DNSSEC-signed domains. The first two (NTP amp and DNS amp 1.x) are common DDoS reflection and amplification vectors. The third (the Dominate TCP attack script) is a modified type of enhanced SYN (ESSYN) attack. These attack scenarios, which combine manipulation of the clock with short certificate lifetimes, can also be used against cloud services.
The third attack type mentioned above, the Dominate TCP attack script, is the only one that spoofs source IPs. This attack mainly uses three sets of flags: 1) SYN, 2) CWR and 3) ECN (Explicit Congestion Notification).
A DNS amplification attack is a DDoS attack variant that allows an attacker to increase the effect by exploiting DNS servers that allow recursive queries, together with an extension of the DNS protocol, EDNS0. The goal is to send a small number of requests that invoke a much bigger response. First of all, the attacker crafts a DNS query for a resource record that it knows will produce a response much larger than the request.
The attacker may obtain this effect by compromising a DNS server and editing its zones to insert an amplification resource record. Next, the attacker gathers a list of open recursive servers, queries them for the amplification record it created, and spoofs the source IP to that of the victim, so that the victim is flooded by a large volume of traffic until it collapses. For all this, the attacker needs a large number of sources for the attack; those who launch this type of DDoS intrusion typically use botnets to generate the greatest possible number of queries.
The DNS amplification attack (also known as the DNS reflection attack) is a popular form of DDoS (distributed denial of service) attack based on the use of open DNS servers, which are accessible to everyone. Basically, an improper DNS configuration is at the heart of this type of DDoS, so let's see how to solve this problem.
A distributed denial of service (DDoS) attack can take many forms that disturb the normal functioning of a website or online service. DNS servers provide basic infrastructure for the Internet and help direct traffic to the correct IP address. In a DNS amplification attack, the attacker takes advantage of a bad configuration in a DNS server to flood a target with DNS response traffic, creating a comparable DDoS flood.
The weak link in the chain that allows DNS amplification attacks is a misconfigured recursive DNS setup. The root cause of the wrong configuration is a recursive DNS server that should respond only to local queries but is instead open to requests from any system.
The technical basis of this attack consists of sending a query to an open recursive DNS server with the source address spoofed to be the address of the victim. When the DNS server sends the response to the DNS query, it is sent to the victim instead. Because the size of the response is usually much greater than that of the request, the attacker is able to amplify the volume of traffic directed at the victim.
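As an illustration of this size difference, a small UDP query that advertises a large EDNS0 buffer can pull back a response many times bigger than the request. A minimal sketch using dig (the resolver hostname is a placeholder and the zone is only an example):

dig ANY example.com @open-resolver.example.net +bufsize=4096

A query of roughly 60 bytes can return a response of several kilobytes over UDP, which is exactly the amplification factor the attacker relies on.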
Attackers can further increase the magnitude of a DNS amplification attack if they control a botnet that can generate even more DNS queries, increasing the size of the final DDoS attack.
Misconfigured DNS servers are not a new phenomenon on the Internet. In 2007, the DNS service provider Infoblox found that more than half of DNS servers surveyed at the time were wide open to recursive queries from anywhere.
The risk from open recursive DNS resolvers remains. Unlike traditional botnets, which could only generate limited traffic volumes because of the modest Internet connections and computers of their victims, these open resolvers usually run on large servers with huge bandwidth. They are like bazookas, and in the event of an attack they can cause massive damage.
IT system administrators can use the site openresolverproject.org to scan their own IP address space and see whether they have an open recursive resolver that the project has already publicly indexed. A similar resolver measurement and testing tool is available at dnsinspect.com, which also provides an online tool for system admins to check for misconfigured DNS servers.
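You can also check a name server by hand. A quick sketch, assuming ns1.example.com is one of your own servers: query it from outside your network for a zone it is not authoritative for and look at the flags in the answer.

dig google.com @ns1.example.com

If the response carries the "ra" (recursion available) flag and returns an answer, the server is acting as an open recursive resolver for outside clients.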
The first step in preventing and mitigating the risk of DNS amplification attacks is to properly configure recursive DNS servers. It has been noticed that many DNS servers that are meant to serve only a single domain nevertheless have recursion enabled.
For DNS servers that are deployed within an organization or ISP to support name queries on behalf of clients, the resolver must be configured to accept requests only from allowed clients. These requests should normally come from clients within the organization's own address ranges, as in the sketch below.
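A minimal sketch for BIND (named.conf), assuming 192.168.1.0/24 stands in for your internal client network; adjust the ranges to your own addressing:

acl "trusted-clients" { 127.0.0.1; 192.168.1.0/24; };
options {
    recursion yes;
    allow-recursion { trusted-clients; };
    allow-query-cache { trusted-clients; };
};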
Going further, DNS amplification attacks use spoofed IP addresses. To counteract this kind of attack, ESDS has configured a Snort rule in our IDS to detect and mitigate this threat:
alert udp $EXTERNAL_NET any -> $HOME_NET 53 (msg:"DNS flooder 1.1 abuse"; content:"|00 ff 00 01 00 00 29 23 28|"; offset:12; sid:20130115; rev:1;)
The open source community always provides handy tools that make system admins' day-to-day operations simple. IPCop is one such tool: in fact, it is a Linux distribution that aims to provide an easy-to-manage firewall appliance. It is a stateful firewall built on the Linux netfilter framework, forked from the Smoothwall Linux firewall in 2001.
From IPCop's firewall setup menu, it is possible to configure external access so that you can perform maintenance and administrative tasks on your server from outside, that is to say from the WAN, the network IPCop calls RED.
So we can allow only the IP address of one machine to access our server on port 445 through IPCop from the WAN. Allowing only one IP address, or otherwise limiting access, keeps a layer of security on connections coming from the "dangerous network" that the exterior represents.
Of course, you can allow connections from any IP address, but we would not recommend doing so. It is better to limit access to IP addresses that you can trust.
In this tutorial we refer to IPCop version 1.4.21, the last stable version. Although its functionality is limited, it is flexible enough to allow the installation of various addons that bring it up to the level of commercial-grade firewalls. This tutorial also applies to version 2.0.x, as the principle remains the same.
Now we will configure access for the client on our IPCop through the RED interface on port 445, solely for access to the administration interface.
To do this, connect to your IPCop management console to set up the configuration. Log on to the interface using the "admin" account. In the menu, go to "Firewall" and then "External Access".
So you get this form:
Here are some explanations to help you understand:
– TCP / UDP: connection protocol to use, in most cases this will be TCP.
– Source address or IP network (blank for “ALL”): You must indicate to whom you allow access, i.e. with a single IP address, a range of IP addresses (a network) or all IP addresses.
– Destination Port: the port you wish to reach on IPCop; for example, indicate 445 to access the HTTPS administration interface, 222 for SSH, etc.
– Enabled: if this box is not checked when the external access rule is added, the rule is registered but inactive. So check the Enabled box, then press the Add button; the rule is added to the next section and listed as an active rule.
– Destination IP Address: leave this field set to "DEFAULT IP", which is the IP address of your RED interface. This is logical, since we are not setting up a redirect but configuring external access to our IPCop server itself.
– Note: enter a comment describing this rule so that you can find your way among all of your external access rules.
All that remains is to click the "Add" button to see the external access rule appear in the list of rules. Finally, test your external access by connecting from the authorized IP address through the RED interface.
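As a quick sketch of such a test from the authorized machine (replace ipcop.example.com with the RED address or hostname of your IPCop box, and adjust the ports to the ones you opened):

curl -kI https://ipcop.example.com:445/
ssh -p 222 root@ipcop.example.com

A connection attempt from any other source address should simply time out.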
With IPCop version 2.0 and later, the same external access configuration is available and is even more complete: you can enable or disable logging for a rule and specify a time range during which the rule applies. Interesting, anyway!
Today we will study the value of centralized logging and, in particular, how to set it up for Cisco systems (routers, switches and so on). You will also find the command-line procedure used on Cisco devices to send some or all of your logs to a remote server. In this tutorial we assume that you already have an operational log server.
First, we must understand what centralizing logs can bring to an architecture, specifically for network elements. Centralization may have several benefits, but its main function is to make it possible to recover a history of the events that occurred on a machine, or on an entire network, when the machines themselves are no longer available.
This can be useful, for example, if a hacking attempt has damaged a machine or destroyed its logs, or if a hardware fault occurs. Centralizing the logs then allows us to trace the events that led to the unavailability of the machine. On the other hand, centralizing logs may also serve a goal of control and supervision: one may want to centralize the logs of a set of machines for better monitoring, indexing or graphing in a system such as Kibana.
1) Since the configuration is done from the command line, we will start by opening a terminal session to the device. Once connected, we switch to privileged EXEC mode with enable:
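On a standard Cisco IOS device this is simply:

enable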
2) It is important to note that the timestamp, i.e. the time stamped on the exported logs, is of particular importance in a centralized logging system, because it makes it possible to accurately correlate logs across multiple machines. This is why the first thing to do is to set our machine to the right date and time:
clock set 20:11:00 November 25 2015
3) We then enter configuration mode to set up log forwarding.
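On IOS this is:

configure terminal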
4) We start by activating the timestamp of the logs:
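A commonly used form of the command on IOS is:

service timestamps log datetime msec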
5) Then we configure the various parameters for sending the logs, starting with the remote server IP:
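Assuming, for the sake of the example, that the log server sits at 192.168.1.50 (replace it with your own server's address):

logging host 192.168.1.50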
6) Then you can specify the log facility, which will allow us to sort the logs on the remote server, for example:
logging facility local5
Another important thing to do is to configure the log level from which logs will be sent. For various reasons, such as performance, you may not want to send all logs to the remote server, so we choose to send only logs above a certain level of criticality. Generally, these log levels exist:
* 7 – debugging
* 6 – informational
* 5 – notifications
* 4 – warnings
* 3 – errors
* 2 – critical
* 1 – alerts
* 0 – emergencies
As you will understand, log level "0" is the most critical and "7" the most verbose if many logs are produced. As part of this tutorial we will, for instance, send logs from levels 6 to 0, so we set the value to "informational":
logging trap informational
Our Cisco system will now begin to send its logs to the remote server. We can review this configuration by returning to privileged EXEC mode and then entering "show logging".
Now that we have configured our Cisco router to send its logs to the remote log server, the server must know how to set these logs apart. In rsyslog, the system used in the log-centralization tutorial mentioned above, open the file "/etc/rsyslog.conf" and add a line so that all incoming logs with the facility we configured (local5 in our example) are placed in a specific file. For example:
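A minimal sketch, assuming the local5 facility configured above and an arbitrary destination file name:

local5.*    /var/log/cisco.log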
Then we restart this service:
service rsyslog restart
We now need to test our Cisco log export by causing an event to be logged. We will then see it appear in the file configured in rsyslog to receive the logs from our Cisco router's facility.
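As a sketch of such a test (the log file path follows the rsyslog example above): generate an event on the router, for instance by entering and leaving configuration mode, and watch the file on the log server:

tail -f /var/log/cisco.log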
In the first part, we looked at three file managers for Linux Fedora (see here: http://www.esds.co.in/kb/file-managers-to-try-in-linux-fedora-part-1/). Now, in this part, we will look at the remaining five options to try.
Here we list file managers available in Linux Fedora for simple management of files and folders. In the first part of the post, we explored three powerful applications: Dolphin (the default file manager on Fedora), Midnight Commander and Krusader. Now let's look at the other file managers available for Fedora, some of which provide a graphical interface for ease of use.
Konqueror is another powerful file manager for Fedora. One of its main features is that it can also be used as a web browser: just enter the URL you want to visit in the address bar.
The main difference compared to the previous file managers is that Konqueror can open multiple tabs, each of which can contain multiple directory views. For example, the window below has been divided into three panels: one on the left and two on the right.
To the left of the pane is a sidebar for easy navigation of the entire file system. The simple and intuitive interface not only facilitates navigation and reorganization of directories, but also makes it easier to find the files for editing and deleting.
Navigation can be done using traditional keyboard commands, but also with the mouse. Konqueror also makes it possible to activate a detailed view showing the file name, a preview, the last modified date, size, owner and permissions.
At the top of the page is the main menu, from which you can access the application's configuration. This file manager allows you to set up the interface to suit your needs and save it so that it is restored after every restart.
The Nautilus file manager works with a single directory pane. In addition to the central pane, it has a sidebar for navigating the file system. Thanks to its ease of use, it is especially recommended for beginners.
Nautilus generally ships with GNOME systems, but it can also be installed and used with KDE. Unlike the previous file manager, it does not support multiple panes, but navigation can be done either with keyboard commands or with the mouse.
Thunar is a file manager very similar to Nautilus, both in terms of graphics and functionality, which is why there is no need to dwell on it.
PCManFM is intended as a replacement for the Nautilus and Thunar file managers. All three share a very simple, similar interface and more or less the same functionality, so we will not dwell on this manager either.
XFE is one of the most flexible file managers and has an interface very similar to the previous three. It can be configured to display one or two panes, with the side navigation bar as an option.
XFE supports all the usual operations, such as drag and drop, but requires a few extra steps to associate files with specific applications such as LibreOffice.
We have seen eight file managers, all free and released under open source licenses. You can install them directly from the Fedora or CentOS repositories.
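For example, on a recent Fedora system the installation would look roughly like this (the package names below are the usual ones, but check your repositories for exact availability):

sudo dnf install mc krusader konqueror nautilus thunar pcmanfm xfe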
Besides these file managers covered by opensource.com, there are certainly others. The choice of a file management program should depend on your needs, so it is not possible to state that one file manager is better than another. If you are using a file manager that isn't mentioned here, share it with us in the comment box below.