The open source community regularly provides handy tools that make system administrators' day-to-day operations simpler. IPCop is one such tool: a Linux distribution that aims to provide an easy-to-manage firewall. It is built on the Linux Netfilter framework and was forked from the Smoothwall Linux firewall in 2001.
From IPCop's setup menu, it is possible to configure external access so that you can perform maintenance and administrative tasks on your server from outside, that is, from the WAN, which IPCop calls the RED network.
For example, we can allow only a single machine's IP address to reach our server through IPCop on port 445 from the WAN. Allowing just one IP address, or a limited range, preserves a layer of security against the "dangerous network" that the outside world represents.
Of course, you can allow connections from any IP address, but we would not recommend doing so. It is better to restrict access to IP addresses that you trust.
In this tutorial we refer to IPCop version 1.4.21, the last stable release. Although its out-of-the-box functionality is limited, it is flexible enough to accept various add-ons that bring it up to the level of commercial-grade firewalls. The tutorial also applies to version 2.0.x, as the principle remains the same.
Now we will configure access for a client through the RED interface of our IPCop, on port 445 only, to reach the administration interface.
To do this, connect to the IPCop management console from your workstation. Log in to the interface with the "admin" account, then go to the "Firewall" menu and choose "External Access".
You will then see the external access form:
Here is what each field means:
– TCP / UDP: the connection protocol to use; in most cases this will be TCP.
– Source address or IP network (blank for "ALL"): indicates who is allowed access: a single IP address, a range of IP addresses (a network), or all IP addresses.
– Destination Port: the port you want to reach on IPCop, for example 445 for the HTTPS administration interface, 222 for SSH, and so on.
– Enabled: check this box to make the rule active. If you add a rule with the box unchecked, it is saved but remains inactive. Once you check Enabled and press the Add button, the rule appears in the section below and is listed as an active rule.
– Destination IP Address: leave this set to "DEFAULT IP", which is the IP address of your RED interface. This is logical, since we are not creating a redirect but configuring external access to the IPCop server itself.
– Note: enter a comment describing the rule so you can find your way among all of your external access rules.
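As a sketch, a rule allowing a single trusted administrator to reach the HTTPS interface might be filled in as follows (203.0.113.10 is a placeholder address, not a value from the original setup):

```
TCP / UDP:          TCP
Source IP:          203.0.113.10
Destination Port:   445
Destination IP:     DEFAULT IP
Enabled:            checked
Note:               Admin HTTPS access from office
```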
It only remains to click the "Add" button, and the external access rule appears in the list of rules. Finally, test your external access by connecting to the RED interface from the authorized IP address.
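One quick way to test from the authorized machine is a simple HTTPS request against the RED interface (203.0.113.1 is a placeholder for your RED address):

```
# -k skips certificate validation, since IPCop uses a self-signed certificate
curl -k https://203.0.113.1:445/
```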
In IPCop version 2.0 and later, the external access configuration is more complete: you can enable or disable logging per rule and specify a time range during which the rule applies, which is quite useful.
Today we will look at the benefits of centralized logging and, in particular, how to set it up on Cisco Systems devices (routers, switches, and so on). You will find the Cisco command-line procedure for sending some or all of your logs to a remote server. This tutorial assumes you already have an operational log server.
First, we need to understand what centralizing logs brings to an architecture, specifically for network elements. Centralization has several benefits, but its main purpose is to be able to retrieve a history of the events that occurred on a machine, or on an entire network, even when the machines themselves are no longer available.
This can be useful, for example, when a hacking attempt has damaged a machine or destroyed its logs, or when an equipment fault occurs. Centralized logs then allow us to trace the events that led to the machine becoming unavailable. Centralization can also serve control and supervision purposes: one may want to centralize the logs of a set of machines for better monitoring, indexing or graphing in a system such as Kibana.
1) Since the configuration is done on the command line, we start by opening a terminal. Once there, we switch to privileged (enable) mode.
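On a standard Cisco IOS device, this is done with the `enable` command:

```
Router> enable
Router#
```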
2) It is important to note that the timestamp, i.e. the time attached to exported logs, is of particular importance in a centralized logging system: it makes it possible to accurately correlate logs across multiple machines. This is why the first thing to do is set our machine to the correct date and time:
clock set 20:11:00 November 25 2015
3) We then enter configuration mode to set up log forwarding.
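On IOS this is the standard `configure terminal` command:

```
Router# configure terminal
Router(config)#
```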
4) We start by enabling timestamps on the logs.
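This uses the IOS `service timestamps` command; a common form, shown here with millisecond precision, is:

```
Router(config)# service timestamps log datetime msec
```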
5) Next, we configure the various parameters for sending the logs, starting with the remote server's IP address.
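Assuming the log server sits at 192.168.1.100 (a placeholder address for this sketch):

```
Router(config)# logging host 192.168.1.100
```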
6) You can then specify the log facility, which will allow us to sort the logs on the remote server, for example:
logging facility local5
Another important step is to configure the severity level from which logs will be sent. For various reasons, such as performance, you may not want to send all logs to the remote server, so you can choose to send only logs above a certain level of criticality. The log levels are as follows:
* 7 – debugging
* 6 – informational
* 5 – notifications
* 4 – warnings
* 3 – errors
* 2 – critical
* 1 – alerts
* 0 – emergencies
As you will have understood, level "0" is the most critical and level "7" the most verbose, since a great many logs are produced at that level. For this tutorial we will send logs of levels 6 through 0, so we set the value to "informational":
logging trap informational
Our Cisco device will now start sending its logs to the remote server. We can review this configuration by returning to enable mode and entering "show logging".
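From configuration mode, this looks like:

```
Router(config)# end
Router# show logging
```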
Now that we have configured our Cisco router to send logs to the remote log server, the server must be set up to store these logs separately. With rsyslog, the system used in the log centralization tutorial mentioned above, edit the file "/etc/rsyslog.conf" and add a line so that all incoming logs carrying the local5 facility are written to a dedicated file.
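Such a rule, matching the local5 facility set on the router earlier, would look like the following (the file path /var/log/cisco.log is just an example; choose any destination you like):

```
local5.*    /var/log/cisco.log
```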
Then we restart this service:
service rsyslog restart
We now need to test our Cisco log export by causing an event to be logged (for example, by entering and leaving configuration mode). We can then check the file configured in rsyslog to receive the local5 logs from our Cisco router.
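On the log server, a quick way to watch the entries arrive (assuming the example file name used above) is:

```
tail -f /var/log/cisco.log
```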
In the first part we saw three file managers for Linux Fedora (see here: http://www.esds.co.in/kb/file-managers-to-try-in-linux-fedora-part-1/). In this part, we will look at the remaining five options to try.
Here we are listing the file managers available in Linux Fedora for simple management of files and folders. In the first part of the post, we explored three powerful applications: Dolphin (the default file manager on Fedora), Midnight Commander and Krusader. Now let's look at the other file managers available for Fedora, some of which provide a graphical interface for ease of use.
Konqueror is another powerful file manager for Fedora. One of its main features is that it can also be used as a web browser: just enter the URL you want to visit in the address bar.
The main difference compared to the previous file managers is that Konqueror can open multiple tabs, each of which can contain multiple panels showing different directories. For example, the window below has been divided into three panels: one on the left and two on the right.
On the left is a sidebar for easy navigation of the entire file system. The simple, intuitive interface not only facilitates navigating and reorganizing directories, but also makes it easier to find files for editing or deletion.
Navigation can be done with the traditional keyboard commands as well as with the mouse. Konqueror also makes it possible to activate a detailed view showing the file name, a preview, the last modified date, the size, the owner and the permissions.
At the top of the page is the main menu, from which you can access the application's configuration. This file manager lets you adjust the interface to suit your needs and save the layout so that it is restored on every restart.
The Nautilus file manager works with a single directory pane. In addition to the central pane, it has a sidebar for navigating the file system. Thanks to its ease of use, it is especially recommended for beginners.
Nautilus generally ships with GNOME-based systems, but it can also be installed and used with KDE. Unlike the previous file manager, it does not support multiple tabs, but navigation can be done either with keyboard commands or with the mouse.
Thunar is a file manager very similar to Nautilus, both in terms of graphics and functionality. For that reason, there is no need to dwell on it here.
PCManFM is intended as a replacement for Nautilus and Thunar. All three share a very simple, similar interface and more or less the same functionality, so we will not dwell on this manager either.
XFE is one of the most flexible file managers and has an interface very similar to the previous three. It can be configured to display one or two panes, with the side navigation bar as an option.
XFE supports all the usual operations such as drag and drop, but it takes a few extra steps to associate files with specific applications such as LibreOffice.
We have now seen eight file managers, all free and open source. You can download them directly from the Fedora or CentOS repositories.
Besides these file managers reviewed by opensource.com, there are certainly others. The choice of a file management program should depend on your needs, so it is not possible to state that one file manager is better than another. If you are using a file manager that is not mentioned here, share it with us in the comment box below.
Managing files and folders in Linux is easy and simple, and there are multiple options to choose from, namely file managers. Let's look at the applications you can install on Fedora to handle this task.
One of the operations performed most frequently by both system administrators and end users on any operating system is file and folder management. During daily work, every user has to carry out operations such as identifying, classifying, deleting and modifying files and folders. For this reason, it is always good to rely on a simple and streamlined file manager in order to handle files as easily and quickly as possible.
Most Linux users are not aware of the wide range of file managers available, or of the full functionality they offer. As an example, we will use Fedora: several file managers are available for this OS, such as Midnight Commander, Konqueror, Dolphin, Krusader, Nautilus, Thunar, PCManFM and XFE.
Opensource.com has published an article that briefly examines each file manager and compares some of their key features. Here is a review based on that article.
Like most Linux distributions, Fedora has a default file manager, which is currently Dolphin. An icon representing the home directory tree is usually present on the Linux desktop; just click it to open the file manager, which starts in the present working directory (PWD). In Fedora versions using KDE 4.1 or higher, the Home icon is located in the Desktop Folder along with the Trash icon.
In KDE, the default file manager can be changed under System Settings / Default Applications / File Manager.
Midnight Commander is a command line interface (CLI) file manager. It is particularly useful when a GUI is not available, but it can also be used as the primary file manager in a terminal session even if you are running a graphical interface. It works with any common shell and over remote terminals through SSH.
To start Midnight Commander from the CLI, just enter the command mc. The user interface is divided into two panels, each showing the contents of a directory, with the name of the current directory displayed at the top of each panel. Navigation is done with the arrow keys, Tab and Enter.
The top of the interface displays a menu bar from which you can access the file manager's configuration settings. The bottom provides information about the highlighted file or directory.
Krusader is a file manager whose UI is similar to the previous application. The difference is that, while it also uses two panels, it is graphical rather than text-based. This means that in addition to keyboard navigation, it also allows navigation with a mouse or trackball.
It therefore has an interface with two panels, each showing a different directory. The detailed view displays, alongside the file icon and name, the size, the last modified date, the owner and the permissions. At the top of the page is a menu containing all the configuration items, and a command line is available at the bottom.
Krusader automatically saves the panel positions on exit and restores them on the next start.
In the second part we will see some more file managers to try. Stay tuned…
Block storage and object storage are two methods of data storage. Let's see in detail what they are and what services they offer.
It is very important to understand what block storage and object storage are, as they play a key role in optimizing the use of cloud services, particularly for storage and for building local infrastructure projects. Let's look at each in detail below.
Block storage is a type of data storage used in Storage Area Network (SAN) environments, which provides data storage volumes in the form of blocks. Each block is presented as a single hard disk and is configured by the administrator. These blocks are controlled by the server operating system and are generally accessed via the Fibre Channel (FC), Fibre Channel over Ethernet (FCoE) or iSCSI protocols.
Because volumes are treated individually as hard drives, block storage works well for a variety of applications such as file systems and databases. Although block storage devices tend to be complex and expensive, they are flexible and provide high performance.
Block storage devices offer a fixed storage capacity. Each volume can be treated as an independent disk drive controlled by an external source, and can be mounted by the host operating system as if it were a physical disk. The most common examples of block storage are SAN, iSCSI and local disks. Block storage is the system most commonly used for most applications and can be local or accessed over the network. The devices are usually formatted with a file system such as FAT32, NTFS, EXT3 or EXT4.
Block storage is ideal for databases, as they require high I/O performance and low-latency connections. It can be used for RAID volumes, where multiple disks are combined. Applications that need processing, such as Java, PHP and .NET, require block storage, as do critical applications such as Oracle, SAP, Microsoft Exchange and Microsoft SharePoint.
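As an illustration of how a block volume is consumed by the host, here is a typical Linux provisioning sequence, assuming the new volume appears as /dev/sdb (a hypothetical device name):

```
lsblk                      # identify the newly attached volume
mkfs.ext4 /dev/sdb         # format it with an ext4 file system
mkdir -p /mnt/data         # create a mount point
mount /dev/sdb /mnt/data   # mount it like any physical disk
df -h /mnt/data            # confirm the capacity is available
```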
Object storage is a method of archiving data in cloud computing environments.
Object storage and block storage represent the two main storage services in the cloud. Above we focused on block storage: how it works and which solutions are available in the cloud. Now we will focus on object storage instead.
Object storage is an architecture that manages data as objects, in contrast to other storage architectures such as file systems, which manage data as a hierarchy of files, or block storage, which manages data as blocks within individual sectors. Each object typically includes the data itself, a variable amount of metadata and a globally unique identifier. Object storage can be implemented at multiple levels, including the device level, the system level and the interface level.
Data held on object storage devices can be accessed directly through an API over HTTP/HTTPS and can be of any type: photos, videos, log files and so on. This type of storage is designed so that data is not lost: data stored this way can be replicated across multiple data centers, and a simple web interface is offered for access.
A simple use of object storage is as a storage service for developers who work with large quantities of media and need unlimited storage. Object storage in fact becomes more and more attractive as the amount of data to be archived grows dramatically, as happens with eNlight Cloud Services, which allow users to upload media files (especially video).
Storing unstructured data such as music, pictures and video, as well as backups, database dumps and log files, are the most common use cases for object storage.