Category archive: Apache

Using an Apache .htaccess file to create a password-protected website directory

In a production environment there are many scenarios that call for restricting access to website directories, and most of them involve the Apache htpasswd tool. I will explain them one by one.

First, use the Apache htpasswd command to create a password file. The htpasswd command is used as follows:

 -c  # Create a password file; if the file already exists, it is overwritten and the original content is lost
 -n  # Display the result without updating the password file
 -m  # Use MD5 encryption (the default)
 -d  # Use CRYPT encryption
 -p  # Store the password in plain text
 -s  # Use SHA encryption
 -b  # Take the username and password from the command line instead of prompting; the plain-text password is visible when generating
 -D  # Delete the specified user

Create a password file and add a user named renwole with password renwole:

 $ htpasswd -c .accpasswd renwole
 New password:
 Re-type new password:
 Adding password for user renwole

Note: the password file name .accpasswd can be anything you like.

Use cat to view the generated content:

 $ cat .accpasswd
 renwole:$apr1$4owQhqtn$ElCDIh0sfR.ZFzeaY9sDw0

Note: the stored value is an encrypted hash of the password, not the password itself.
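The hash is in APR1 (Apache-flavored MD5) format. As a sketch of what htpasswd does under the hood — assuming OpenSSL's `passwd` applet is available on the machine — the same hash can be reproduced from the salt visible in the file:

```shell
# Recompute the APR1 hash for password "renwole" using the salt stored in
# .accpasswd ($apr1$<salt>$<digest>); htpasswd stores one "user:hash" per line.
hash=$(openssl passwd -apr1 -salt 4owQhqtn renwole)
printf 'renwole:%s\n' "$hash"
```

If the recomputed hash matches the line in .accpasswd, the password is correct; this is essentially how Apache verifies Basic-auth credentials.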

Add multiple accounts:

 $ htpasswd -b .accpasswd renwolecom password-renwolecom
 Adding password for user renwolecom

View multiple generated accounts and passwords:

 $ cat .accpasswd
 renwole:$apr1$4owQhqtn$ElCDIh0sfR.ZFzeaY9sDw0
 renwolecom:$apr1$3zzGmKtR$jKKCbU2nVEQZFz9mtEXE./

Delete user:

 $ htpasswd -D .accpasswd renwolecom
 Deleting password for user renwolecom

View the password file after the deletion:

 $ cat .accpasswd
 renwole:$apr1$4owQhqtn$ElCDIh0sfR.ZFzeaY9sDw0

Create a password-protected area

With the password file in place, we can use a .htaccess file to create the protected area.

Save the following as a .htaccess file:

 $ vim /apps/web/renwolecom/phpMyadmin/.htaccess
 AuthType Basic
 AuthName "restricted area"
 AuthUserFile /usr/local/apache/conf/.accpasswd
 require valid-user

Put the file in the directory you want to protect; here I put it in the phpMyadmin directory under the website root. When you access this directory, a verification window pops up; enter the username and password generated above.
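If you want to admit only specific accounts rather than any user present in the password file, `Require user` can replace `require valid-user`. A minimal sketch, assuming the same password-file path as above:

```apache
AuthType Basic
AuthName "restricted area"
AuthUserFile /usr/local/apache/conf/.accpasswd
# Only the named account may enter; other accounts in the file still get 401
Require user renwole
```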

Installing and configuring Apache from source on CentOS 7 for production

What is Apache?

Apache is the most popular web server on the Internet; more than half of the world's websites use it. It is an industrial-grade web server.

In this article I mainly introduce how to install the Apache HTTP server. As long as you follow the steps in this tutorial, the installation will succeed. Note that if you install Apache without PHP, delete the contents of the FilesMatch block in the configuration below, otherwise Apache will report an error because PHP cannot be found.

If you need to install PHP/MySQL database, this tutorial is very suitable for you. For the installation of PHP and MySQL, please read steps 11 and 12.

Note: this tutorial targets a production environment; Apache parses PHP via PHP-FPM/FastCGI.

System environment: CentOS 7.4, Apache 2.4.29

1. Update the system

 $ yum update && yum upgrade -y

2. Install expansion packs and dependent packs

 $ yum install epel-release -y
 $ yum install gcc gcc-c++ openssl openssl-devel libtool expat-devel zlib-devel python-devel -y

3. Install pcre

 $ wget ftp://ftp.csx.cam.ac.uk/pub/software/programming/pcre/pcre-8.41.tar.gz
 $ tar zxvf pcre-8.41.tar.gz
 $ cd pcre-8.41
 $ ./configure
 $ make -j8 && make install

4. Install nghttp2

 $ wget https://github.com/nghttp2/nghttp2/releases/download/v1.27.0/nghttp2-1.27.0.tar.gz
 $ tar zxvf nghttp2-1.27.0.tar.gz
 $ cd nghttp2-1.27.0
 $ ./configure
 $ make -j8 && make install
 $ echo '/usr/local/lib' > /etc/ld.so.conf.d/local.conf
 $ ldconfig

5. Install Apache httpd

Create the user and group, then download the httpd, apr, and apr-util packages:

 $ groupadd www
 $ useradd -g www www
 $ cd /tmp
 $ wget https://mirrors.shuosc.org/apache/httpd/httpd-2.4.29.tar.gz
 $ wget https://mirrors.tuna.tsinghua.edu.cn/apache/apr/apr-1.6.3.tar.gz
 $ wget https://mirrors.tuna.tsinghua.edu.cn/apache/apr/apr-util-1.6.1.tar.gz

Decompress related packages:

 $ tar xvf httpd-2.4.29.tar.gz
 $ tar xvf apr-1.6.3.tar.gz
 $ tar xvf apr-util-1.6.1.tar.gz

Copy apr-1.6.3 and apr-util-1.6.1 into the httpd-2.4.29/srclib directory as follows:

 $ cd httpd-2.4.29
 $ cp -R ../apr-1.6.3 srclib/apr
 $ cp -R ../apr-util-1.6.1 srclib/apr-util

Note: the destination directory names must not contain the version number, otherwise the libraries will not be found during compilation and installation.
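A scratch-directory illustration of why the names matter — `configure` looks for `srclib/apr` and `srclib/apr-util` literally. The paths below are throwaway demo directories, not the real sources:

```shell
# Recreate the copy step against dummy directories: the destination name
# drops the version suffix, so "ls srclib" shows plain "apr".
rm -rf /tmp/httpd-srclib-demo
mkdir -p /tmp/httpd-srclib-demo/apr-1.6.3 /tmp/httpd-srclib-demo/httpd-2.4.29/srclib
cd /tmp/httpd-srclib-demo/httpd-2.4.29
cp -R ../apr-1.6.3 srclib/apr
ls srclib
```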

Start installation:

 $ ./configure \
 --prefix=/usr/local/apache \
 --enable-mods-shared=most \
 --enable-headers \
 --enable-mime-magic \
 --enable-proxy \
 --enable-so \
 --enable-rewrite \
 --with-ssl \
 --with-nghttp2 \
 --enable-ssl \
 --enable-deflate \
 --with-pcre \
 --with-included-apr \
 --with-apr-util \
 --enable-mpms-shared=all \
 --with-mpm=prefork \
 --enable-remoteip \
 --enable-http2 \
 --enable-dav \
 --enable-expires \
 --enable-static-support \
 --enable-suexec \
 --enable-modules=all
 $ make -j8 && make install

Installation is complete.

6. Configure httpd.conf

 $ cd /usr/local/apache/conf
 $ vim httpd.conf

In addition to the default configuration, uncomment the following lines to enable the related modules:

 LoadModule ext_filter_module modules/mod_ext_filter.so
 LoadModule deflate_module modules/mod_deflate.so
 LoadModule expires_module modules/mod_expires.so
 LoadModule remoteip_module modules/mod_remoteip.so
 LoadModule proxy_module modules/mod_proxy.so
 LoadModule proxy_connect_module modules/mod_proxy_connect.so
 LoadModule proxy_ftp_module modules/mod_proxy_ftp.so
 LoadModule proxy_http_module modules/mod_proxy_http.so
 LoadModule proxy_fcgi_module modules/mod_proxy_fcgi.so
 LoadModule proxy_scgi_module modules/mod_proxy_scgi.so
 LoadModule proxy_wstunnel_module modules/mod_proxy_wstunnel.so
 LoadModule proxy_ajp_module modules/mod_proxy_ajp.so
 LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
 LoadModule proxy_express_module modules/mod_proxy_express.so
 LoadModule slotmem_shm_module modules/mod_slotmem_shm.so
 LoadModule ssl_module modules/mod_ssl.so
 LoadModule http2_module modules/mod_http2.so
 LoadModule lbmethod_byrequests_module modules/mod_lbmethod_byrequests.so
 LoadModule lbmethod_bytraffic_module modules/mod_lbmethod_bytraffic.so
 LoadModule lbmethod_bybusyness_module modules/mod_lbmethod_bybusyness.so
 LoadModule lbmethod_heartbeat_module modules/mod_lbmethod_heartbeat.so
 LoadModule suexec_module modules/mod_suexec.so
 LoadModule rewrite_module modules/mod_rewrite.so

Add the following after the LoadModule section:

 <IfModule http2_module>
 ProtocolsHonorOrder On
 Protocols h2 http/1.1
 </IfModule>

Then modify the following parameters.

Set the user and group to:

 User www
 Group www

Default value:

 #ServerName www.example.com:80

Change it to:

 ServerName 0.0.0.0:80

Note: if you create port-based virtual hosts, add the extra port with a Listen directive on a new line, then specify that port in the virtual-host configuration file.
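As a sketch of that setup — port 8080 and the document root here are illustrative, not part of the default configuration:

```apache
# In httpd.conf: listen on the extra port (a new line after the default Listen 80)
Listen 8080

# In the virtual-host configuration file: bind a host to that port
<VirtualHost *:8080>
    DocumentRoot "/data/apps/web/renwolecom"
</VirtualHost>
```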

Change every occurrence of:

 AllowOverride None

Change it to:

 AllowOverride All

Below the following lines:

 AddType application/x-compress .Z
 AddType application/x-gzip .gz .tgz

add:

 AddType application/x-httpd-php .php
 AddType application/x-httpd-php-source .phps

Uncomment the following line to add Apache support for Python (CGI):

 AddHandler cgi-script .cgi .py

Uncomment the following lines:

 Include conf/extra/httpd-mpm.conf
 Include conf/extra/httpd-vhosts.conf
 Include conf/extra/httpd-default.conf

Append the following at the end of the file.

Enable GZIP:

 <IfModule mod_headers.c>
 AddOutputFilterByType DEFLATE text/html text/plain text/css text/xml text/javascript
 <FilesMatch "\.(js|css|html|htm|png|jpg|swf|pdf|shtml|xml|flv|gif|ico|jpeg)$">
 RequestHeader edit "If-None-Match" "^(.*)-gzip(.*)$" "$1$2"
 Header edit "ETag" "^(.*)-gzip(.*)$" "$1$2"
 </FilesMatch>
 DeflateCompressionLevel 6
 SetOutputFilter DEFLATE
 </IfModule>

Harden Apache:

 ProtocolsHonorOrder On
 PidFile /usr/local/apache/logs/httpd.pid
 ServerTokens ProductOnly
 ServerSignature Off

Import the virtual-host configuration files:

 IncludeOptional conf/vhost/*.conf

7. Create directory and virtual host configuration file

 $ mkdir -p /usr/local/apache/conf/vhost
 $ mkdir -p /data/apps/web/renwolecom
 $ vim /usr/local/apache/conf/vhost/renwolecom.conf

Insert the following:

 <VirtualHost *:80>
 ServerAdmin webmaster@example.com
 DocumentRoot "/data/apps/web/renwolecom"
 ServerName www.renwole.com
 ServerAlias renwole.com
 ErrorDocument 404 /404.html
 ErrorLog "/usr/local/apache/logs/renwolecom-error_log"
 CustomLog "/usr/local/apache/logs/renwolecom-access_log" combined
 <FilesMatch \.php$>
 SetHandler "proxy:unix:/tmp/php-cgi.sock|fcgi://localhost"
 </FilesMatch>
 <Directory "/data/apps/web/renwolecom">
 SetOutputFilter DEFLATE
 Options FollowSymLinks
 AllowOverride All
 Require all granted
 DirectoryIndex index.php index.html index.htm default.php default.html default.htm
 </Directory>
 </VirtualHost>

Please modify the above content according to the actual path of your website.

Note: if you do not have a domain name, change the domain-binding part to 127.0.0.1; you can then access the site by IP and port by adding extra ports in httpd.conf. Remember to open the ports in the firewall.

Note: it is recommended to empty httpd-vhosts.conf or not include it at all; otherwise restarting the httpd service prints a warning (it cannot find the website directory path in the default configuration), though operation is unaffected.

8. Create system unit startup file

 $ vim /usr/lib/systemd/system/httpd.service

Add the following:

 [Unit]
 Documentation=man:systemd-sysv-generator(8)
 SourcePath=/usr/local/apache/bin/apachectl
 Description=LSB: starts Apache Web Server
 Before=runlevel2.target
 Before=runlevel3.target
 Before=runlevel4.target
 Before=runlevel5.target
 Before=shutdown.target
 After=all.target
 After=network-online.target
 Conflicts=shutdown.target

 [Service]
 Type=forking
 Restart=no
 TimeoutSec=5min
 IgnoreSIGPIPE=no
 KillMode=process
 GuessMainPID=no
 RemainAfterExit=yes
 ExecStart=/usr/local/apache/bin/apachectl start
 ExecStop=/usr/local/apache/bin/apachectl stop

 [Install]
 WantedBy=multi-user.target

9. Add auto start/start/stop/restart

 $ systemctl enable httpd
 $ systemctl start httpd
 $ systemctl stop httpd
 $ systemctl restart httpd

10. Set Firewall

 $ firewall-cmd --permanent --zone=public --add-service=http
 $ firewall-cmd --permanent --zone=public --add-service=https
 $ firewall-cmd --reload

Then you can use the domain name or IP to access your website.

11. Deploy PHP

See "CentOS 7 source-code compilation and installation of PHP 7.1 for production".

12. Deploy MySQL

See "CentOS 7 binary installation and configuration of the MariaDB (MySQL) database".

This completes the installation and configuration of the Apache HTTP server. If any error occurs during installation or configuration, check the error logs under /usr/local/apache/logs/ so you can quickly resolve the problem.

If you have any other suggestions, please leave a message.

Apache and Nginx: prohibit directories from executing PHP script files

When building a website, we may need to set permissions on some directories separately to achieve the security we need. The following examples show how to configure a directory under Apache or Nginx so that PHP files inside it cannot be executed.

1. Apache configuration

 <Directory /apps/web/renwole/wp-content/uploads>
 php_flag engine off
 </Directory>
 <Directory ~ "^/apps/web/renwole/wp-content/uploads">
 <Files ~ ".php">
 Order allow,deny
 Deny from all
 </Files>
 </Directory>

2. Nginx configuration

 location /wp-content/uploads {
     location ~ .*\.(php)?$ {
         deny all;
     }
 }

Nginx prohibits multiple directories from executing PHP:

 location ~* ^/(css|uploads)/.*\.(php)$ {
     deny all;
 }

After the configuration is complete, reload the configuration file or restart the Apache or Nginx service. From then on, any PHP file accessed through uploads returns 403, which greatly improves the security of the web directory.
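The multi-directory pattern above can be exercised outside Nginx with `grep -E`, which speaks a close-enough regex dialect; the sample request paths below are hypothetical:

```shell
# Paths matching the pattern would be denied by the location block;
# everything else falls through to normal handling.
regex='^/(css|uploads)/.*\.php$'
for p in /uploads/shell.php /css/x.php /uploads/pic.jpg /theme/x.php; do
  if printf '%s\n' "$p" | grep -Eq "$regex"; then
    echo "deny  $p"
  else
    echo "allow $p"
  fi
done
```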

Install and configure Kafka Manager distributed management tool

Kafka Manager supports the following features (translated from the official description):

Manage multiple clusters
Easily inspect cluster state (topics, consumers, offsets, brokers, replica distribution, partition assignment)
Run preferred replica election
Generate partition assignments, with the option to select which brokers to use
Run reassignment of partitions (based on generated assignments)
Create a topic with optional topic configuration (0.8.1.1 has a different configuration from 0.8.2+)
Delete a topic (only supported on 0.8.2+; remember to set delete.topic.enable=true in the broker configuration)
The topic list now indicates topics marked for deletion (only supported on 0.8.2+)
Batch generate partition assignments for multiple topics, with the option to select which brokers to use
Batch run partition reassignment for multiple topics
Add partitions to an existing topic
Update the configuration of an existing topic
Optionally enable JMX polling for broker-level and topic-level metrics
Optionally filter out consumers that have no ids/owners/offsets/directories in ZooKeeper

Requirements:

Kafka 0.8.x, 0.9.x, or 0.10.x
Java 8+

To install the Kafka server itself, see:

Install and configure an Apache Kafka distributed messaging system cluster on CentOS 7

1. Install the sbt tool:

 # curl https://bintray.com/sbt/rpm/rpm > bintray-sbt-rpm.repo
 # mv bintray-sbt-rpm.repo /etc/yum.repos.d/
 # yum install sbt -y

2. Build the kafka-manager package

The generated package will be under kafka-manager/target/universal. Kafka Manager only needs a Java environment to run; sbt does not need to be installed on the deployment machine.

 # cd /usr/local
 # git clone https://github.com/yahoo/kafka-manager
 # cd kafka-manager
 # ./sbt clean dist    # takes a long time, about 30-60 minutes

Note: kafka-manager 1.3.3.22 is also available as a prebuilt download.

Move the package:

 # mv target/universal/kafka-manager-1.3.3.13.zip /usr/local/

Decompress and create a soft link:

 # unzip kafka-manager-1.3.3.13.zip
 # ln -s kafka-manager-1.3.3.13 kafka-manager

Modify configuration:

 # vim kafka-manager/conf/application.conf
 kafka-manager.zkhosts="10.10.204.63:2181,10.10.204.64:2181,10.10.204.65:2181"

3. Start kafka manager

After running the command below, startup logs appear in the window and the current session blocks, so open a new terminal to continue. Ending the session (Ctrl+C) makes kafka-manager exit automatically.

 # kafka-manager/bin/kafka-manager

After startup, access it at IP:9000.

4. To manage it conveniently with systemctl, create a system unit file below (and enable start on boot):

 # vim /usr/lib/systemd/system/kafka-manager.service

 [Unit]
 Description=Kafka Manager
 After=network.target

 [Service]
 User=kafka
 Group=kafka
 ExecStart=/usr/local/kafka-manager/bin/kafka-manager -Dconfig.file=/usr/local/kafka-manager/conf/application.conf
 ExecStop=/usr/local/kafka-manager/bin/kafka-manager stop
 Restart=always

 [Install]
 WantedBy=multi-user.target

Reload the systemctl configuration and add it to the boot auto start:

 # systemctl daemon-reload
 # systemctl enable kafka-manager
 # systemctl start kafka-manager

Open the port in the firewall:

 # firewall-cmd --permanent --add-port=9000/tcp
 # firewall-cmd --reload

Done.

Install and configure an Apache Kafka distributed messaging system cluster on CentOS 7

Apache Kafka is a popular distributed message broker designed to handle large volumes of real-time data efficiently. Kafka clusters are not only highly scalable and fault-tolerant, they also offer higher throughput than other message brokers such as ActiveMQ and RabbitMQ. Although it is usually used as a pub/sub messaging system, many organizations also use it for log aggregation because it provides persistent storage for published messages.

You can deploy Kafka on a single server or build a distributed Kafka cluster for higher performance. This article describes how to install Apache Kafka on multiple CentOS 7 server instances.

Prerequisites:

Before installing the Kafka cluster servers, install the following components:

Linux JAVA JDK JRE environment variable installation and configuration
Install and configure Apache Zookeeper distributed cluster on multiple Linux nodes

Server list:

10.10.204.63
10.10.204.64
10.10.204.65

1. Installation

Create the user and group:

 # groupadd kafka
 # useradd -g kafka -s /sbin/nologin kafka

Download Kafka package:

 # cd /usr/local
 # wget https://apache.fayea.com/kafka/0.10.2.1/kafka_2.10-0.10.2.1.tgz

Decompress and create a soft link:

 # tar zxvf kafka_2.10-0.10.2.1.tgz
 # ln -s kafka_2.10-0.10.2.1 kafka

Set permissions and create Kafka log storage directory:

 # chown -R kafka:kafka kafka_2.10-0.10.2.1 kafka
 # mkdir -p /usr/local/kafka/logs

Add system variable:

Edit the /etc/profile file and append the following at the bottom:

 export KAFKA_HOME=/usr/local/kafka_2.10-0.10.2.1
 export PATH=$KAFKA_HOME/bin:$PATH

Make the variables take effect:

 # source /etc/profile
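The effect of the two exports can be checked in the current shell — after sourcing, Kafka's bin directory should be the first entry on the search path:

```shell
# Same exports as in /etc/profile, then show the head of PATH.
export KAFKA_HOME=/usr/local/kafka_2.10-0.10.2.1
export PATH=$KAFKA_HOME/bin:$PATH
echo "$PATH" | cut -d: -f1
```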

2. Configuration

Modify and add the configuration file of Kafka server:

 # cd /usr/local/kafka/config
 # vim server.properties

#Unique value, which is different for each server.
broker.id=63
#Allow deleting topics.
delete.topic.enable=true
#Modification; Protocol, current broker machine IP, port. Multiple values can be configured, which are related to SSL, etc.
listeners=PLAINTEXT://10.10.204.63:9092
#Modification; The storage address of kafka data. Multiple addresses are separated by commas, such as/data/kafka-logs-1,/data/kafka-logs-2.
log.dirs=/usr/local/kafka/logs/kafka-logs
#The number of partitions for each topic will be overwritten by the specified parameters when the topic is created if it is not specified.
num.partitions=3
#New; Indicates the maximum size of the message body, in bytes.
message.max.bytes=5242880
#New; the default replication factor for automatically created topics.
default.replication.factor=2
#New; The maximum size of data acquired by replica each time.
replica.fetch.max.bytes=5242880
#New; The following configuration must be used in the configuration file, otherwise it will only be marked for deletion rather than actual deletion.
delete.topic.enable=true
#New; Whether to allow the leader to perform automatic balancing. The boolean value is true by default.
auto.leader.rebalance.enable=true
#The zk address of the kafka connection is consistent with the configuration of each broker.
zookeeper.connect=10.10.204.63:2181,10.10.204.64:2181,10.10.204.65:2181

#Optional configuration
#Whether to allow automatic creation of topic and boolean values. The default value is true.
auto.create.topics.enable=true
#Specify the topic compression type (string, optional). Valid values include gzip, snappy, lz4, and producer.
compression.type=producer
#All logs will be synchronized to the disk to avoid log recovery after restart and reduce restart time.
controlled.shutdown.enable=true

Note: The configuration file of the broker contains the address of the zookeeper and its own broker ID. When the broker is started, a new znode will be created in the zookeeper.
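Since only broker.id and listeners differ per node, a quick way to confirm a node's identity is to read them back from the file. Sketched here against a scratch copy of the minimal per-broker settings (paths are throwaway):

```shell
# Write the minimal per-broker settings to a scratch file, then extract
# the values that must be unique on every node.
cat > /tmp/server.properties.demo <<'EOF'
broker.id=63
delete.topic.enable=true
listeners=PLAINTEXT://10.10.204.63:9092
zookeeper.connect=10.10.204.63:2181,10.10.204.64:2181,10.10.204.65:2181
EOF
grep '^broker.id=' /tmp/server.properties.demo | cut -d= -f2
grep '^listeners=' /tmp/server.properties.demo | cut -d= -f2-
```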

Modify other configuration files:

 # vim zookeeper.properties

Change it to:

 dataDir=/usr/local/zookeeper/data
 Newly added:
 server.1=10.10.204.63:2888:3888
 server.2=10.10.204.64:2888:3888
 server.3=10.10.204.65:2888:3888

Modify the following configuration files:

 # vim producer.properties
 bootstrap.servers=10.10.204.63:9092,10.10.204.64:9092,10.10.204.65:9092

 # vim consumer.properties
 zookeeper.connect=10.10.204.63:2181,10.10.204.64:2181,10.10.204.65:2181

3. Start

Start the kafka service on all nodes (you can check the log or the process status to ensure that the Kafka cluster starts successfully):

 # /usr/local/kafka/bin/kafka-server-start.sh /usr/local/kafka/config/server.properties

After executing the above command, startup information scrolls until the window goes still; open a new terminal to check whether startup succeeded:

 # jps
 9939 Jps
 2201 QuorumPeerMain
 2303 Kafka

4. Use test

The following operations can be run from a new terminal on any node.

Execute the following command to create a topic named renwole:

 # cd /usr/local/kafka/bin
 # ./kafka-topics.sh --create --zookeeper 10.10.204.63:2181,10.10.204.64:2181,10.10.204.65:2181 --replication-factor 1 --partitions 1 --topic renwole
 Created topic "renwole".

Explanation:

 --replication-factor 1   create one replica
 --partitions 1           create one partition
 --topic renwole          the topic name

To view the created topics:

 # ./kafka-topics.sh --list --zookeeper 10.10.204.63:2181
 __consumer_offsets
 renwole

Note: You can configure broker to automatically create topics.

Send messages. Kafka ships a simple command-line producer: type any content and press Enter to send (each line is one message by default); press Ctrl+C to exit.

 # ./kafka-console-producer.sh --broker-list 10.10.204.64:9092 --topic renwole

On the message receiving end, execute the following command to view the received message:

 # ./kafka-console-consumer.sh --bootstrap-server 10.10.204.63:9092 --topic renwole --from-beginning

Execute the following command to delete topic:

 # ./kafka-topics.sh --delete --zookeeper 10.10.204.63:2181,10.10.204.64:2181,10.10.204.65:2181 --topic renwole

5. View cluster status

Kafka is now installed. Check the status of the Kafka cluster node IDs:

Note: you can connect the ZooKeeper client from any node.

 # cd /usr/local/zookeeper/bin
 # ./zkCli.sh
 Connecting to localhost:2181
 ...
 Welcome to ZooKeeper!
 JLine support is enabled
 ...
 [myid:] - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@1299] - Session establishment complete on server localhost/0:0:0:0:0:0:0:1:2181, sessionid = 0x35ddf80430b0008, negotiated timeout = 30000

 WATCHER::

 WatchedEvent state:SyncConnected type:None path:null
 [zk: localhost:2181(CONNECTED) 0] ls /brokers/ids   #View IDs
 [63, 64, 65]

You can see that all three Kafka instance IDs are online; if you take any node offline, the listing changes accordingly (these IDs are the broker.id values set earlier).

Open the port in the firewall:

 # firewall-cmd --permanent --add-port=9092/tcp
 # firewall-cmd --reload

6. Start at boot

To create a system unit file:

Create kafka.service in the/usr/lib/systemd/system directory and fill in the following contents:

 [Unit]
 Description=Apache Kafka server (broker)
 Documentation=https://kafka.apache.org/documentation/
 Requires=network.target remote-fs.target
 After=network.target remote-fs.target

 [Service]
 Type=simple
 Environment="LOG_DIR=/usr/local/kafka/logs"
 User=kafka
 Group=kafka
 #Environment=JAVA_HOME=/usr/java/jdk1.8.0_144
 ExecStart=/usr/local/kafka/bin/kafka-server-start.sh /usr/local/kafka/config/server.properties
 ExecStop=/usr/local/kafka/bin/kafka-server-stop.sh
 Restart=on-failure
 SyslogIdentifier=kafka

 [Install]
 WantedBy=multi-user.target

Start the kafka instance server:

 # systemctl daemon-reload
 # systemctl enable kafka.service
 # systemctl start kafka.service
 # systemctl status kafka.service

7. Kafka can also be managed through a web UI. See:

Install and configure Kafka Manager distributed management tool

That's it: the Kafka distributed message queue is now deployed, including the necessary optimizations. You can now use the Kafka client of most programming languages to create producers and consumers and use them easily in your projects.

References:
https://kafka.apache.org/documentation.html#quickstart
https://www.ibm.com/developerworks/cn/opensource/os-cn-kafka/
https://tech.meituan.com/kafka-fs-design-theory.html
https://blog.jobbole.com/99195/

Sharing Tomcat session variables with Redisson (Tomcat Redis Session Manager)

When deploying load balancing, one Nginx serves as the front-end server and multiple Tomcats as back-end servers, with Nginx distributing requests to the Tomcat servers according to its load policy. By default a Tomcat session cannot cross servers: if different requests from the same user are distributed to different Tomcat servers, the session variables are lost and the user must log in again. Before explaining Redisson, I will first analyze session sharing under the following two schemes.

Scheme 1: Nginx Native Upstream ip_hash

ip_hash distributes requests by client IP address: requests from the same IP are forwarded to the same Tomcat server.

1. The Nginx configuration is as follows (see "Nginx 1.3 cluster load-balancing reverse proxy installation, configuration, and optimization"):

 upstream webservertomcat {
     ip_hash;
     server 10.10.204.63:8023 weight=1 max_fails=2 fail_timeout=2;
     server 10.10.204.64:8023 weight=1 max_fails=2 fail_timeout=2;
     #server 127.0.0.1:8080 backup;
 }
 server {
     listen 80;
     server_name www.myname.com myname.com;
     location / {
         proxy_pass http://webservertomcat;
         proxy_redirect off;
         proxy_set_header Host $host;
         proxy_set_header X-Real-IP $remote_addr;
         proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
         client_max_body_size 10m;
         client_body_buffer_size 128k;
         proxy_connect_timeout 90;
         proxy_send_timeout 90;
         proxy_read_timeout 90;
         proxy_buffer_size 4k;
         proxy_buffers 4 32k;
         proxy_busy_buffers_size 64k;
         proxy_temp_file_write_size 64k;
         add_header Access-Control-Allow-Origin *;
     }
 }

The above is not recommended for production, for two reasons:

1. Nginx may not be the front-most server. For example, if Squid or a CDN sits in front as a cache, Nginx actually sees the IP of the Squid or CDN server, which defeats routing by the client's real IP.
2. Some organizations use dynamic virtual IPs or have multiple egress IPs; when a user's IP switches, their requests cannot be pinned to the same Tomcat server.

Scheme 2: Nginx_Upstream_jvm_route

Nginx_upstream_jvm_route is a third-party Nginx module that implements session stickiness via the session cookie. If there is no session in the cookie or URL, it falls back to simple round-robin load balancing.

How it works: the user's first request is distributed to a back-end server, and that server's ID is appended to the JSESSIONID cookie in the response; when jvm_route later sees the back-end server's name in the session, it forwards the request to the corresponding server. Module address: https://code.google.com/archive/p/nginx-upstream-jvm-route/

1. Nginx configuration is as follows:

 upstream tomcats_jvm_route {
     server 10.10.204.63:8023 srun_id=tomcat1;
     server 10.10.204.64:8023 srun_id=tomcat2;
     jvm_route $cookie_JSESSIONID|sessionid reverse;
 }

2. Add the following configuration to server.xml of multiple Tomcat servers:

 <Engine name="Catalina" defaultHost="localhost" jvmRoute="tomcat1">
 <Engine name="Catalina" defaultHost="localhost" jvmRoute="tomcat2">

3. After configuration, the server ID is appended to the end of the JSESSIONID cookie, like this:

 JSESSIONID=33775B80D8E0DB5AEAD61F86D6666C45.tomcat2;
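The routing key is just the suffix after the final dot, which plain shell expansion can pull apart (the cookie value is taken from the example above):

```shell
# Split "<session id>.<server id>" the way jvm_route does.
cookie='33775B80D8E0DB5AEAD61F86D6666C45.tomcat2'
echo "session id: ${cookie%.*}"
echo "backend:    ${cookie##*.}"
```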

4. This is also not recommended for production, because:

4.1 When the jvmRoute value is added to the server.xml configuration file, Tomcat suffixes the session ID with it; nginx-upstream-jvm-route then matches the session ID in each request to the corresponding server. Every visit therefore goes to the same Tomcat server, which solves the problem of sessions changing when different Tomcat nodes are hit. However, when the Tomcat server a user has been visiting goes down, the load balancer distributes the user to another server, the session changes, and the user must log in again.

Scheme 3: Tomcat Redis Session Manager

Using Redis to share sessions is the focus of this section. The two schemes above keep the session inside the Tomcat container, so when a request is forwarded from one Tomcat to another the session becomes invalid, because sessions are not shared between Tomcats. If a cache database such as Redis stores the sessions, they can be shared between Tomcat instances.

The Java framework Redisson implements a Tomcat Session Manager that supports Apache Tomcat 6.x, 7.x, and 8.x. Configure it as follows:

1. Edit the TOMCAT_BASE/conf/context.xml file and add the following inside the Context node:

 <Manager className="org.redisson.tomcat.RedissonSessionManager"
          configPath="${catalina.base}/redisson-config.json" />

Note: configPath is the path to Redisson's configuration file, in JSON or YAML format. See the configuration documentation for details.

2. Copy the corresponding two JAR packages into the TOMCAT_BASE/lib directory:

Applicable to JDK 1.8+

redisson-all-3.5.0.jar

for Tomcat 6.x

redisson-tomcat-6-3.5.0.jar

for Tomcat 7.x

redisson-tomcat-7-3.5.0.jar

for Tomcat 8.x

redisson-tomcat-8-3.5.0.jar

3. Following the single-instance-mode instructions, create a redisson-config.json file in JSON format with the following content, upload it to TOMCAT_BASE, and reload the Tomcat service.

 {
   "singleServerConfig": {
     "idleConnectionTimeout": 10000,
     "pingTimeout": 1000,
     "connectTimeout": 10000,
     "timeout": 3000,
     "retryAttempts": 3,
     "retryInterval": 1500,
     "reconnectionTimeout": 3000,
     "failedAttempts": 3,
     "password": null,
     "subscriptionsPerConnection": 5,
     "clientName": null,
     "address": "redis://127.0.0.1:6379",
     "subscriptionConnectionMinimumIdleSize": 1,
     "subscriptionConnectionPoolSize": 50,
     "connectionMinimumIdleSize": 10,
     "connectionPoolSize": 64,
     "database": 0,
     "dnsMonitoring": false,
     "dnsMonitoringInterval": 5000
   },
   "threads": 0,
   "nettyThreads": 0,
   "codec": null,
   "useLinuxNativeEpoll": false
 }

4. Test the session. Create session.jsp with the following code and upload it to TOMCAT_BASE/webapps/ROOT/.

 <%@ page language="java" contentType="text/html; charset=UTF-8" pageEncoding="UTF-8"%>
 <!DOCTYPE html>
 <html>
 <head>
 <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
 <title>shared session</title>
 </head>
 <body>
 <br>session id=<%=session.getId()%>
 </body>
 </html>

5. Accessing the page in a browser shows session information like:

 session id=1D349A69E8F5A27E1C12DEEFC304F0DC

6. Now check whether the value is stored in Redis. If it is, the session value will survive a Tomcat restart.

 # redis-cli
 127.0.0.1:6379> keys *
 1) "redisson_tomcat_session:1D349A69E8F5A27E1C12DEEFC304F0DC"

The session has been stored successfully.
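As the output above shows, the key pairs a fixed prefix with the JSESSIONID, so given a session ID you can predict which key to inspect — a small sketch using the ID from the example:

```shell
# Build the key the manager uses for a given session id.
sid='1D349A69E8F5A27E1C12DEEFC304F0DC'
key="redisson_tomcat_session:${sid}"
echo "$key"
# It could then be checked with: redis-cli exists "$key"
```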

For Redis cluster or multi-instance mode, refer to the following additional documentation:

https://github.com/redisson/redisson/wiki/14.-Integration%20with%20frameworks#144-tomcat-redis-session-manager
https://github.com/redisson/redisson/wiki/2.-Configuration
https://github.com/redisson/redisson/wiki/2.-%E9%85%8D%E7%BD%AE%E6%96%B9%E6%B3%95 (Chinese document)