Introducing the Nginx Web Server
Nginx is an open-source web server, first released in 2004 by the Russian engineer Igor Sysoev. From the beginning, Nginx’s core focus has been high performance, high concurrency, and low memory usage. Nginx also offers features such as load balancing, caching, access and bandwidth control, and the ability to integrate effectively with many applications, which have made it the second most popular web server in the world. Many web hosting experts consider it the fastest web server available today, and given that it is free and open source, it is easy to imagine a very successful future for Nginx.
It’s interesting to note that WordPress serves 33 million sites and 3.4 billion web pages each month and interacts with 339 million users. In fact, WordPress traffic grew 4.4-fold from 2008 onward, and the team had to migrate to the Nginx web server to keep up with this volume of requests. The web server proved easy to work with, and its flexibility surprised the WordPress team: in one of their tests, a single Nginx server answered 10,000 requests per second.
According to 2012 figures, WordPress served 70,000 requests per second and 15 Gbit/s of traffic through 36 Nginx load balancers. Nginx currently serves more than 25% of the world’s 1,000 busiest sites, and more than 70 million websites use it as their web server; Netflix, Pinterest, GitHub, and Heroku are among them.
One of the biggest challenges for a website architect has always been handling concurrent requests. Since the launch of web services, the number of simultaneous requests has kept growing, and it is not unusual for a popular website to serve hundreds of thousands or even millions of users at the same time. A decade ago, the main cause of high concurrency was slow clients (users on dial-up or similarly slow connections). Today concurrency has risen for a different reason: a combination of mobile users and newer software architectures that keep a persistent connection open so users receive live updates of news and information from their friends. Another important factor driving concurrency is the behavior of modern browsers, which open four to six simultaneous connections to a website to speed up page loading.
To illustrate the problem of slow clients, consider an Apache-based web server producing a relatively short 100 KB response (a web page with text or an image). Generating or retrieving this page may take only a fraction of a second, but transmitting it to a client with 80 kbps (10 KB/s) of bandwidth takes ten seconds. In effect, the web server pulls the content together quickly, then spends ten seconds sending it before the connection can be released. Now imagine a thousand clients connected simultaneously, all requesting the same content. If only 1 MB of additional memory is allocated per client, roughly 1 GB of memory is needed to serve just a thousand clients requesting this content, and a typical Apache-based web server commonly allocates more than 1 MB of additional memory per connection.
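The back-of-the-envelope numbers above can be checked with a few lines of Python (the 100 KB response size, 80 kbps client bandwidth, and 1 MB-per-connection figure are the ones from the text):

```python
# Rough model of the slow-client scenario described above.
response_kb = 100        # response size in kilobytes
client_kbps = 80         # client bandwidth in kilobits per second

# 100 KB = 800 kilobits; at 80 kbps the transfer takes 10 seconds.
transfer_seconds = (response_kb * 8) / client_kbps
print(transfer_seconds)  # 10.0

# If the server pins ~1 MB of memory per connection, a thousand
# concurrent slow clients tie up roughly a gigabyte.
connections = 1000
mb_per_connection = 1
total_gb = connections * mb_per_connection / 1024
print(round(total_gb, 2))  # 0.98
```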
With persistent (keep-alive) connections, the problem of handling concurrency becomes even more pronounced: to avoid the latency of establishing new HTTP connections, clients stay connected, and the web server allocates a certain amount of memory for each connected client.
As a result, to handle the growing workload that comes with a larger audience and more concurrent users, a website should be built on a number of very efficient building blocks. Other parts of the equation, such as hardware, network capacity, software, and data-storage architecture, are important too, but it is the web-server software that accepts and processes client connections. The web server should therefore be able to scale nonlinearly with the growing number of requests per second and simultaneous connections.
Are There Other Benefits to Using Nginx?
High-performance, efficient concurrency handling has been the headline benefit of Nginx, but this web server has other interesting advantages as well. Over the past few years, web architects have embraced the idea of decoupling and separating their application infrastructure from the web server, where previously a website would be built as a single stack based on Linux, MySQL, and PHP, Python, or Perl.
Nginx is very well suited to this role because it moves the key tasks needed for concurrency control from the application layer to the more efficient web-server layer: latency handling, SSL (Secure Sockets Layer) termination, static content, compression and caching, filtering of unwanted connections and requests, and even HTTP media streaming. It can also integrate directly with NoSQL solutions (such as memcached or Redis) to serve a large number of simultaneous users more efficiently.
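As a sketch, several of these offloaded tasks can be seen together in a minimal server block; the paths, certificate names, upstream address, and cache zone below are hypothetical, not from the original text:

```nginx
# Hypothetical cache zone for proxied responses
proxy_cache_path /var/cache/nginx keys_zone=appcache:10m;

server {
    listen 443 ssl;                        # SSL termination at the web-server layer
    server_name example.com;
    ssl_certificate     /etc/nginx/tls/example.crt;
    ssl_certificate_key /etc/nginx/tls/example.key;

    gzip on;                               # compression handled by Nginx

    location /static/ {
        root /srv/www;                     # static content served directly
    }

    location / {
        proxy_pass http://127.0.0.1:8080;  # dynamic requests go to the app server
        proxy_cache appcache;              # responses cached by Nginx
    }
}
```

The application server behind `proxy_pass` never sees static requests, TLS handshakes, or compression work; it only handles the dynamic traffic that actually needs it.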
With new development kits and programming languages, more companies have changed the way their applications are developed and deployed, and Nginx has been one of the key components of this change, helping many companies rapidly scale their web services.
The first lines of Nginx were written in 2002, and it was released publicly in 2004. The number of Nginx users keeps growing, which has brought more collaborative ideas, bug reports, suggestions, and scrutiny.
The codebase is original and written entirely in the C programming language. Nginx has been ported to many architectures and operating systems, such as Linux, Windows, Mac OS X, and FreeBSD. Rather than relying heavily on the standard C library, this web server ships with its own libraries.
Although Nginx runs in the Windows environment, its Windows version is more a proof of concept than a production-ready web server for Windows. Certain limitations in Nginx and in the Windows kernel architecture mean that Nginx on Windows does not work well in all situations. Known issues of the Windows version include support for fewer simultaneous connections, lower performance, no caching, and no bandwidth-management policies.
Nginx can be used as a reverse proxy for the HTTP, HTTPS, SMTP, POP3, and IMAP protocols. It can also be used as a load balancer in front of a variety of servers, such as application servers or mail servers. The Nginx web server is available on a variety of platforms, including Windows, Linux, and UNIX. In terms of resources, it also has an active community, and plenty of material about it is available on the Internet.
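A minimal load-balancer configuration illustrating this role might look like the following (the upstream name and backend addresses are made up for the example):

```nginx
# Hypothetical pool of application servers
upstream app_backend {
    server 10.0.0.11:8080;
    server 10.0.0.12:8080;
    server 10.0.0.13:8080;  # requests are distributed round-robin by default
}

server {
    listen 80;

    location / {
        proxy_pass http://app_backend;  # Nginx acts as reverse proxy / load balancer
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

Round-robin is only the default; Nginx also supports other balancing methods, such as least-connections and IP-hash.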
Nginx has a master process and several worker processes. The master process objective is to read and evaluate configuration and maintenance of worker processes. Worker processes perform basic processing on requests. Nginx uses an event-driven and operating system-based model to distribute requests between worker processes. This distribution operation is based on operating system resources because it is operating system-dependent, and applications are never blocked at all. The number of worker processes in the configuration file can be defined and is usually set equal to the number of CPU cores.
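In nginx.conf, this looks like the following (the values shown are illustrative):

```nginx
# Usually set equal to the number of CPU cores;
# "auto" lets Nginx detect the core count itself.
worker_processes auto;

events {
    worker_connections 1024;  # max simultaneous connections per worker
}
```

With these settings, an upper bound on concurrent connections is roughly worker_processes multiplied by worker_connections.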