limits.conf with ansible

[root@localhost ~]# ansible-galaxy init limits.conf
limits.conf was created successfully

[root@localhost ~]# ansible-doc pam_limits

[root@localhost ~]# vim limits.conf/tasks/main.yml

# tasks file for limits.conf
- pam_limits:
    domain: "{{ item.domain }}"
    limit_type: "{{ item.limit_type }}"
    limit_item: "{{ item.limit_item }}"
    value: "{{ item.value }}"
  with_items: "{{ limits_conf_settings }}"

[root@localhost ~]# vim limits_conf.yml

- hosts: all
  roles:
    - limits.conf
  vars:
    limits_conf_settings:
      - domain: joe
        limit_type: soft
        limit_item: nofile
        value: 64000

[root@localhost ~]# ansible-playbook limits_conf.yml -C

PLAY [all] *************************************************************************************************************************************

TASK [Gathering Facts] *************************************************************************************************************************
ok: [localhost]

TASK [limits.conf : pam_limits] ****************************************************************************************************************
skipping: [localhost] => (item={u'domain': u'joe', u'limit_item': u'nofile', u'limit_type': u'soft', u'value': 64000})

PLAY RECAP *************************************************************************************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=0

[root@localhost ~]# ansible-playbook limits_conf.yml

PLAY [all] *************************************************************************************************************************************

TASK [Gathering Facts] *************************************************************************************************************************
ok: [localhost]

TASK [limits.conf : pam_limits] ****************************************************************************************************************
changed: [localhost] => (item={u'domain': u'joe', u'limit_item': u'nofile', u'limit_type': u'soft', u'value': 64000})

PLAY RECAP *************************************************************************************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=0

[root@localhost ~]# tail -n1 /etc/security/limits.conf
joe soft nofile 64000

[root@localhost ~]# su - joe
Last login: Wed Sep 13 09:05:21 IST 2017 on pts/0
[joe@localhost ~]$ ulimit -Sn
64000

Ansible summary

# An Ansible summary

# Configuration file

[intro\_configuration.html](http://docs.ansible.com/intro_configuration.html)

First one found of:

* Contents of `$ANSIBLE_CONFIG`
* `./ansible.cfg`
* `~/.ansible.cfg`
* `/etc/ansible/ansible.cfg`

Configuration settings can be overridden by environment variables - see
constants.py in the source tree for names.

# Patterns

[intro\_patterns.html](http://docs.ansible.com/intro_patterns.html)

Used on the `ansible` command line, or in playbooks.

* `all` (or `*`)
* hostname: `foo.example.com`
* groupname: `webservers`
* or: `webservers:dbserver`
* exclude: `webserver:!phoenix`
* intersection: `webservers:&staging`

Operators can be chained: `webservers:dbservers:&staging:!phoenix`

Patterns can include variable substitutions: `{{foo}}`, wildcards:
`*.example.com` or `192.168.1.*`, and regular expressions:
`~(web|db).*\.example\.com`
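
For example, a play can combine these operators directly in its `hosts:` line; the group names here are invented for illustration:

    - hosts: webservers:&staging:!phoenix
      tasks:
        - name: show which hosts matched the pattern
          debug: msg="{{ inventory_hostname }} matched"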

# Inventory files

[intro\_inventory.html](http://docs.ansible.com/intro_inventory.html),
[intro\_dynamic\_inventory.html](http://docs.ansible.com/intro_dynamic_inventory.html)

'INI-file' structure, blocks define groups. Hosts allowed in more than
one group. Non-standard SSH port can follow hostname separated by ':'
(but see also `ansible_ssh_port` below).

Hostname ranges: `www[01:50].example.com`, `db-[a:f].example.com`

Per-host variables: `foo.example.com foo=bar baz=wibble`

* `[foo:children]`: new group `foo` containing all members of included groups
* `[foo:vars]`: variable definitions for all members of group `foo`

Inventory file defaults to `/etc/ansible/hosts`. Overridable with `-i`
or in the configuration file. The 'file' can also be a dynamic
inventory script. If a directory, all contained files are processed.
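
A small inventory pulling these features together might look like the following sketch (hostnames, groups and variables are invented for illustration):

    [webservers]
    www[01:03].example.com
    foo.example.com foo=bar baz=wibble

    [dbservers]
    db-[a:c].example.com ansible_ssh_port=2222

    [site:children]
    webservers
    dbservers

    [site:vars]
    ntp_server=ntp.example.com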

# Variable files: 

[intro\_inventory.html](http://docs.ansible.com/intro_inventory.html)

YAML; given inventory file at `./hosts`:

* `./group_vars/foo`: variable definitions for all members of group `foo`
* `./host_vars/foo.example.com`: variable definitions for foo.example.com

`group_vars` and `host_vars` directories can also exist in the playbook
directory. If both paths exist, variables in the playbook directory
will be loaded second. 
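
For example, with a `webservers` group in the inventory, `./group_vars/webservers` could hold YAML definitions applied to every member of that group (the values are illustrative):

    ---
    # ./group_vars/webservers
    http_port: 80
    max_clients: 200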

# Behavioral inventory parameters:

[intro\_inventory.html](http://docs.ansible.com/intro_inventory.html)

* `ansible_ssh_host`
* `ansible_ssh_port`
* `ansible_ssh_user`
* `ansible_ssh_pass`
* `ansible_sudo_pass`
* `ansible_connection`
* `ansible_ssh_private_key_file`
* `ansible_python_interpreter`
* `ansible_*_interpreter`
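
Several of these can be combined on a single inventory line, e.g. (all names and values are invented for illustration):

    jumphost.example.com ansible_ssh_host=10.0.0.5 ansible_ssh_port=2222 ansible_ssh_user=deploy ansible_python_interpreter=/usr/bin/python2.6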

# Playbooks

[playbooks\_intro.html](http://docs.ansible.com/playbooks_intro.html),
[playbooks\_roles.html](http://docs.ansible.com/playbooks_roles.html)

Playbooks are a YAML list of one or more plays. Most (all?) keys are
optional. Lines can be broken on space with continuation lines
indented.

Playbooks consist of a list of one or more 'plays' and/or inclusions:

    ---
    - include: playbook.yml
    - <play>
    - ...

## Plays

[playbooks\_intro.html](http://docs.ansible.com/playbooks_intro.html),
[playbooks\_roles.html](http://docs.ansible.com/playbooks_roles.html),
[playbooks\_variables.html](http://docs.ansible.com/playbooks_variables.html),
[playbooks\_conditionals.html](http://docs.ansible.com/playbooks_conditionals.html),
[playbooks\_acceleration.html](http://docs.ansible.com/playbooks_acceleration.html),
[playbooks\_delegation.html](http://docs.ansible.com/playbooks_delegation.html),
[playbooks\_prompts.html](http://docs.ansible.com/playbooks_prompts.html),
[playbooks\_tags.html](http://docs.ansible.com/playbooks_tags.html),
[Forum posting](https://groups.google.com/forum/#!topic/ansible-project/F9mIAfo6orc),
[Forum posting](https://groups.google.com/forum/#!topic/Ansible-project/MU_ws7zynnI)
    
Plays consist of play metadata and a sequence of task and handler
definitions, and roles.

    - hosts: webservers
      remote_user: root
      sudo: yes
      sudo_user: postgres
      su: yes
      su_user: exim
      gather_facts: no
      accelerate: no
      accelerate_port: 5099
      any_errors_fatal: yes
      max_fail_percentage: 30
      connection: local
      serial: 5
      vars:
        http_port: 80
      vars_files:
        - "vars.yml"
        - [ "try-first.yml", "try-second.yml" ]
      vars_prompt:
        - name: "my_password2"
          prompt: "Enter password2"
          default: "secret"
          private: yes
          encrypt: "md5_crypt"
          confirm: yes
          salt: 1234
          salt_size: 8
      tags: 
        - stuff
        - nonsense
      pre_tasks:
        - <task>
        - ...
      roles:
        - common
        - { role: common, port: 5000, when: "bar == 'Baz'", tags: [one, two] }
        - { role: common, when: month == 'Jan' }
        - ...
      tasks:
        - include: tasks.yaml
        - include: tasks.yaml foo=bar baz=wibble
        - include: tasks.yaml
          vars:
            foo: aaa 
            baz:
              - z
              - y
        - { include: tasks.yaml, foo: zzz, baz: [a,b]}
        - include: tasks.yaml
          when: day == 'Thursday'
        - <task>
        - ...
      post_tasks:
        - <task>
        - ...
      handlers:
        - include: handlers.yml
        - <task>
        - ...

Using `encrypt` with `vars_prompt` requires that
[Passlib](http://pythonhosted.org/passlib/) is installed.

In addition the source code implies the availability of the following
which don't *seem* to be mentioned in the documentation: `name`, `user` (deprecated), `port`, `accelerate_ipv6`, `role_names`, and `vault_password`.

## Task definitions

[playbooks\_intro.html](http://docs.ansible.com/playbooks_intro.html),
[playbooks\_roles.html](http://docs.ansible.com/playbooks_roles.html),
[playbooks\_async.html](http://docs.ansible.com/playbooks_async.html),
[playbooks\_checkmode.html](http://docs.ansible.com/playbooks_checkmode.html),
[playbooks\_delegation.html](http://docs.ansible.com/playbooks_delegation.html),
[playbooks\_environment.html](http://docs.ansible.com/playbooks_environment.html),
[playbooks\_error_handling.html](http://docs.ansible.com/playbooks_error_handling.html),
[playbooks\_tags.html](http://docs.ansible.com/playbooks_tags.html)
[ansible-1-5-released](http://www.ansible.com/blog/2014/02/28/ansible-1-5-released)
[Forum posting](https://groups.google.com/forum/#!topic/ansible-project/F9mIAfo6orc)
[Ansible examples](https://github.com/ansible/ansible-examples/blob/master/language_features/complex_args.yml)

Each task definition is a list of items, normally including at least a
name and a module invocation:

    - name: task
      remote_user: apache
      sudo: yes
      sudo_user: postgres
      sudo_pass: wibble
      su: yes
      su_user: exim
      ignore_errors: True
      delegate_to: 127.0.0.1
      async: 45
      poll: 5
      always_run: no
      run_once: false
      meta: flush_handlers
      no_log: true
      environment: <hash>
      environment:
        var1: val1
        var2: val2
      tags: 
        - stuff
        - nonsense
      <module>: src=template.j2 dest=/etc/foo.conf
      action: <module>, src=template.j2 dest=/etc/foo.conf
      action: <module>
      args:
          src=template.j2
          dest=/etc/foo.conf
      local_action: <module> /usr/bin/take_out_of_pool {{ inventory_hostname }}
      when: ansible_os_family == "Debian"
      register: result
      failed_when: "'FAILED' in result.stderr"
      changed_when: result.rc != 2
      notify:
        - restart apache

`delegate_to: 127.0.0.1` is implied by `local_action:`

The forms `<module>: <args>`, `action: <module> <args>`, and `local_action: <module> <args>` are mutually-exclusive. 

Additional keys `when_*`, `until`, `retries` and `delay` are documented below under 'Loops'.

In addition the source code implies the availability of the following
which don't *seem* to be mentioned in the documentation: 
`first_available_file` (deprecated), `transport`, 
`connection`, `any_errors_fatal`.

# Roles

[playbooks\_roles.html](http://docs.ansible.com/playbooks_roles.html)

Directory structure:

    playbook.yml
    roles/
       common/
         tasks/
           main.yml
         handlers/
           main.yml
         vars/
           main.yml
         meta/
           main.yml
         defaults/
           main.yml
         files/
         templates/
         library/

# Modules

[modules.html](http://docs.ansible.com/modules.html),
[modules\_by\_category.html](http://docs.ansible.com/modules_by_category.html)

List all installed modules with

    ansible-doc --list

Document a particular module with

    ansible-doc <module>

Show a playbook snippet for a specified module with

    ansible-doc -s <module>

# Variables

[playbooks\_roles.html](http://docs.ansible.com/playbooks_roles.html),
[playbooks\_variables.html](http://docs.ansible.com/playbooks_variables.html)

Names: letters, digits, underscores; starting with a letter.

## Substitution examples: 

* `{{ var }}`
* `{{ var["key1"]["key2"]}}`
* `{{ var.key1.key2 }}`
* `{{ list[0] }}`

YAML requires an item starting with a variable substitution to be quoted.
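
For example (a minimal sketch; `http_port` is assumed to be defined elsewhere):

    vars:
      copied_port: "{{ http_port }}"          # starts with a substitution, so it must be quoted
      greeting: Listening on {{ http_port }}  # does not start with one; quoting is optional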

## Sources: 

* Highest priority:
    * `--extra-vars` on the command line
* General:
    * `vars` component of a playbook
    * From files referenced by `vars_file` in a playbook
    * From included files (incl. roles)
    * Parameters passed to includes
    * `register:` in tasks
* Lower priority:
    * Inventory (set on host or group)
* Lower priority:
    * Facts (see below)
    * Any `/etc/ansible/facts.d/filename.fact` on managed machines 
      (sets variables with an `ansible_local.filename.` prefix)
* Lowest priority
    * Role defaults (from defaults/main.yml)
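
As a sketch of the two ends of this scale, a role default is overridden by a play-level `vars` entry (role name and values are invented):

    # roles/common/defaults/main.yml
    ---
    http_port: 80

    # playbook.yml
    - hosts: webservers
      roles:
        - common
      vars:
        http_port: 8080    # higher priority than the role default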

## Built-in:

* `hostvars` (e.g. `hostvars[other.example.com][...]`)
* `group_names` (groups containing current host)
* `groups` (all groups and hosts in the inventory)
* `inventory_hostname` (current host as in inventory)
* `inventory_hostname_short` (first component of inventory_hostname)
* `play_hosts` (hostnames in scope for current play)
* `inventory_dir` (location of the inventory)
* `inventory_file` (name of the inventory)

## Facts:

Run `ansible hostname -m setup` to see them all; of particular note:

* `ansible_distribution`
* `ansible_distribution_release`
* `ansible_distribution_version`
* `ansible_fqdn`
* `ansible_hostname`
* `ansible_os_family`
* `ansible_pkg_mgr`
* `ansible_default_ipv4.address`
* `ansible_default_ipv6.address`

## Content of 'registered' variables:

[playbooks\_conditionals.html](http://docs.ansible.com/playbooks_conditionals.html),
[playbooks\_loops.html](http://docs.ansible.com/playbooks_loops.html)

Depends on module. Typically includes:

* `.rc`
* `.stdout`
* `.stdout_lines`
* `.changed`
* `.msg` (following failure)
* `.results` (when used in a loop)

See also `failed`, `changed`, etc filters.

When used in a loop, the `results` element is a list containing all
responses from the module.
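
A typical register-and-reuse sequence looks like this (the command and message are illustrative):

    - command: cat /etc/motd
      register: motd_contents

    - debug: msg="motd says {{ motd_contents.stdout }}"
      when: motd_contents.rc == 0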

## Additionally available in templates:

* `ansible_managed`: string containing the information below
* `template_host`: node name of the template's machine
* `template_uid`: the owner
* `template_path`: absolute path of the template
* `template_fullpath`: the absolute path of the template
* `template_run_date`: the date that the template was rendered

# Filters

[playbooks\_variables.html](http://docs.ansible.com/playbooks_variables.html)

* `{{ var | to_nice_json }}`
* `{{ var | to_json }}`
* `{{ var | from_json }}`
* `{{ var | to_nice_yaml }}`
* `{{ var | to_yaml }}`
* `{{ var | from_yaml }}`
* `{{ result | failed }}`
* `{{ result | changed }}`
* `{{ result | success }}`
* `{{ result | skipped }}`
* `{{ var | mandatory }}`
* `{{ var | default(5) }}`
* `{{ list1 | unique }}`
* `{{ list1 | union(list2) }}`
* `{{ list1 | intersect(list2) }}`
* `{{ list1 | difference(list2) }}`
* `{{ list1 | symmetric_difference(list2) }}`
* `{{ ver1 | version_compare(ver2, operator='>=', strict=True) }}`
* `{{ list | random }}`
* `{{ number | random }}`
* `{{ number | random(start=1, step=10) }}`
* `{{ list | join(" ") }}`
* `{{ path | basename }}`
* `{{ path | dirname }}`
* `{{ path | expanduser }}`
* `{{ path | realpath }}`
* `{{ var | b64decode }}`
* `{{ var | b64encode }}`
* `{{ filename | md5 }}`
* `{{ var | bool }}`
* `{{ var | int }}`
* `{{ var | quote }}`
* `{{ var | md5 }}`
* `{{ var | fileglob }}`
* `{{ var | match }}`
* `{{ var | search }}`
* `{{ var | regex }}`
* `{{ var | regex_replace('from', 'to') }}`

See also [default jinja2
filters](http://jinja.pocoo.org/docs/templates/#builtin-filters). In
YAML, values starting with `{` must be quoted.
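
Filters can be chained inside a single expression, e.g. (the variable name is invented):

    - debug: msg="{{ extra_packages | default([]) | unique | join(', ') }}"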

# Lookups

[playbooks\_lookups.html](http://docs.ansible.com/playbooks_lookups.html)

Lookups are evaluated on the control machine. 

* `{{ lookup('file', '/etc/foo.txt') }}`
* `{{ lookup('password', '/tmp/passwordfile length=20 chars=ascii_letters,digits') }}`
* `{{ lookup('env','HOME') }}`
* `{{ lookup('pipe','date') }}`
* `{{ lookup('redis_kv', 'redis://localhost:6379,somekey') }}`
* `{{ lookup('dnstxt', 'example.com') }}`
* `{{ lookup('template', './some_template.j2') }}`

Lookups can be assigned to variables and will be evaluated each time
the variable is used.

Lookup plugins also support loop iteration (see below).
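
For example, a lookup can be assigned to a variable and used later in a task (the file path is an assumption):

    vars:
      motd_value: "{{ lookup('file', '/etc/motd') }}"

    tasks:
      - debug: msg="motd is {{ motd_value }}"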

# Conditions

[playbooks\_conditionals.html](http://docs.ansible.com/playbooks_conditionals.html)

`when: <condition>`, where condition is:

* `var == "Value"`, `var >= 5`, etc.
* `var`, where `var` coerces to boolean (yes, true, True, TRUE)
* `var is defined`, `var is not defined`
* `<condition1> and <condition2>` (also `or`?)

Combined with `with_items`, the when statement is processed for each item.

`when` can also be applied to includes and roles. Conditional Imports
and variable substitution in file and template names can avoid the
need for explicit conditionals.
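
A minimal conditional task (the package and variable names are chosen for illustration):

    - apt: name=apache2 state=present
      when: ansible_os_family == "Debian" and webserver_enabled is defined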

# Loops

[playbooks\_loops.html](http://docs.ansible.com/playbooks_loops.html)

In addition the source code implies the availability of the following
which don't *seem* to be mentioned in the documentation: `csvfile`, `etcd`, `inventory_hostname`. 

## Standard:

    - user: name={{ item }} state=present groups=wheel
      with_items:
        - testuser1
        - testuser2
       
    - name: add several users
      user: name={{ item.name }} state=present groups={{ item.groups }}
      with_items:
        - { name: 'testuser1', groups: 'wheel' }
        - { name: 'testuser2', groups: 'root' }

      with_items: somelist
    
## Nested:

    - mysql_user: name={{ item[0] }} priv={{ item[1] }}.*:ALL                
                               append_privs=yes password=foo
      with_nested:
        - [ 'alice', 'bob', 'eve' ]
        - [ 'clientdb', 'employeedb', 'providerdb' ]
        
## Over hashes:

Given

    ---
    users:
      alice:
        name: Alice Appleworth
        telephone: 123-456-7890
      bob:
        name: Bob Bananarama
        telephone: 987-654-3210
        
    tasks:
      - name: Print phone records
        debug: msg="User {{ item.key }} is {{ item.value.name }} 
                         ({{ item.value.telephone }})"
        with_dict: users

## Fileglob:

    - copy: src={{ item }} dest=/etc/fooapp/ owner=root mode=600
      with_fileglob:
        - /playbooks/files/fooapp/*

In a role, relative paths resolve relative to the
`roles/<rolename>/files` directory.

## With content of file:

(see example for `authorized_key` module)

    - authorized_key: user=deploy key="{{ item }}"
      with_file:
        - public_keys/doe-jane
        - public_keys/doe-john

See also the `file` lookup when the content of a file is needed.

## Parallel sets of data:

Given

    ---
    alpha: [ 'a', 'b', 'c', 'd' ]
    numbers:  [ 1, 2, 3, 4 ]
    
    - debug: msg="{{ item.0 }} and {{ item.1 }}"
      with_together:
        - alpha
        - numbers

## Subelements:

Given

    ---
    users:
      - name: alice
        authorized:
          - /tmp/alice/onekey.pub
          - /tmp/alice/twokey.pub
      - name: bob
        authorized:
          - /tmp/bob/id_rsa.pub
    
    - authorized_key: "user={{ item.0.name }} 
                       key='{{ lookup('file', item.1) }}'"
      with_subelements:
         - users
         - authorized
         
## Integer sequence:

Decimal, hexadecimal (0x3f8) or octal (0600)

    - user: name={{ item }} state=present groups=evens
      with_sequence: start=0 end=32 format=testuser%02x
          
      with_sequence: start=4 end=16 stride=2
          
      with_sequence: count=4
          
## Random choice:

    - debug: msg={{ item }}
      with_random_choice:
         - "go through the door"
         - "drink from the goblet"
         - "press the red button"
         - "do nothing"
         
## Do-Until:

    - action: shell /usr/bin/foo
      register: result
      until: result.stdout.find("all systems go") != -1
      retries: 5
      delay: 10

## Results of a local program:

    - name: Example of looping over a command result
      shell: /usr/bin/frobnicate {{ item }}
      with_lines: /usr/bin/frobnications_per_host 
                           --param {{ inventory_hostname }}
                           
To loop over the results of a remote program, use `register: result`
and then `with_items: result.stdout_lines` in a subsequent
task.
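
A sketch of that two-task pattern (the remote command is made up):

    - command: /usr/local/bin/list_widgets
      register: widgets

    - debug: msg="found widget {{ item }}"
      with_items: widgets.stdout_lines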
                           
## Indexed list:

    - name: indexed loop demo
      debug: msg="at array position {{ item.0 }} there is 
                                         a value {{ item.1 }}"
      with_indexed_items: some_list
      
## Flattened list:

    ---
    # file: roles/foo/vars/main.yml
    packages_base:
      - [ 'foo-package', 'bar-package' ]
    packages_apps:
      - [ ['one-package', 'two-package' ]]
      - [ ['red-package'], ['blue-package']]
      
    - name: flattened loop demo
      yum: name={{ item }} state=installed
      with_flattened:
        - packages_base
        - packages_apps      

## First found:

    - name: template a file
      template: src={{ item }} dest=/etc/myapp/foo.conf
      with_first_found:
        - files:
            - "{{ ansible_distribution }}.conf"
            - default.conf
          paths:
             - search_location_one/somedir/
             - /opt/other_location/somedir/
            
# Tags

Both plays and tasks support a `tags:` attribute.

    - template: src=templates/src.j2 dest=/etc/foo.conf
      tags:
        - configuration

Tags can be applied to roles and includes (effectively tagging all
included tasks)
         
    roles:
        - { role: webserver, port: 5000, tags: [ 'web', 'foo' ] }

    - include: foo.yml tags=web,foo
    
To select by tag:

    ansible-playbook example.yml --tags "configuration,packages"
    ansible-playbook example.yml --skip-tags "notification"

# Command lines

## ansible

    Usage: ansible <host-pattern> [options]

    Options:
      -a MODULE_ARGS, --args=MODULE_ARGS
                            module arguments
      -k, --ask-pass        ask for SSH password
      --ask-su-pass         ask for su password
      -K, --ask-sudo-pass   ask for sudo password
      --ask-vault-pass      ask for vault password
      -B SECONDS, --background=SECONDS
                            run asynchronously, failing after X seconds
                            (default=N/A)
      -C, --check           don't make any changes; instead, try to predict some
                            of the changes that may occur
      -c CONNECTION, --connection=CONNECTION
                            connection type to use (default=smart)
      -f FORKS, --forks=FORKS
                            specify number of parallel processes to use
                            (default=5)
      -h, --help            show this help message and exit
      -i INVENTORY, --inventory-file=INVENTORY
                            specify inventory host file
                            (default=/etc/ansible/hosts)
      -l SUBSET, --limit=SUBSET
                            further limit selected hosts to an additional pattern
      --list-hosts          outputs a list of matching hosts; does not execute
                            anything else
      -m MODULE_NAME, --module-name=MODULE_NAME
                            module name to execute (default=command)
      -M MODULE_PATH, --module-path=MODULE_PATH
                            specify path(s) to module library
                            (default=/usr/share/ansible)
      -o, --one-line        condense output
      -P POLL_INTERVAL, --poll=POLL_INTERVAL
                            set the poll interval if using -B (default=15)
      --private-key=PRIVATE_KEY_FILE
                            use this file to authenticate the connection
      -S, --su              run operations with su
      -R SU_USER, --su-user=SU_USER
                            run operations with su as this user (default=root)
      -s, --sudo            run operations with sudo (nopasswd)
      -U SUDO_USER, --sudo-user=SUDO_USER
                            desired sudo user (default=root)
      -T TIMEOUT, --timeout=TIMEOUT
                            override the SSH timeout in seconds (default=10)
      -t TREE, --tree=TREE  log output to this directory
      -u REMOTE_USER, --user=REMOTE_USER
                            connect as this user (default=jw35)
      --vault-password-file=VAULT_PASSWORD_FILE
                            vault password file
      -v, --verbose         verbose mode (-vvv for more, -vvvv to enable
                            connection debugging)
      --version             show program's version number and exit

##  ansible-playbook

    Usage: ansible-playbook playbook.yml

    Options:
      -k, --ask-pass        ask for SSH password
      --ask-su-pass         ask for su password
      -K, --ask-sudo-pass   ask for sudo password
      --ask-vault-pass      ask for vault password
      -C, --check           don't make any changes; instead, try to predict some
                            of the changes that may occur
      -c CONNECTION, --connection=CONNECTION
                            connection type to use (default=smart)
      -D, --diff            when changing (small) files and templates, show the
                            differences in those files; works great with --check
      -e EXTRA_VARS, --extra-vars=EXTRA_VARS
                            set additional variables as key=value or YAML/JSON
      -f FORKS, --forks=FORKS
                            specify number of parallel processes to use
                            (default=5)
      -h, --help            show this help message and exit
      -i INVENTORY, --inventory-file=INVENTORY
                            specify inventory host file
                            (default=/etc/ansible/hosts)
      -l SUBSET, --limit=SUBSET
                            further limit selected hosts to an additional pattern
      --list-hosts          outputs a list of matching hosts; does not execute
                            anything else
      --list-tasks          list all tasks that would be executed
      -M MODULE_PATH, --module-path=MODULE_PATH
                            specify path(s) to module library
                            (default=/usr/share/ansible)
      --private-key=PRIVATE_KEY_FILE
                            use this file to authenticate the connection
      --skip-tags=SKIP_TAGS
                            only run plays and tasks whose tags do not match these
                            values
      --start-at-task=START_AT
                            start the playbook at the task matching this name
      --step                one-step-at-a-time: confirm each task before running
      -S, --su              run operations with su
      -R SU_USER, --su-user=SU_USER
                            run operations with su as this user (default=root)
      -s, --sudo            run operations with sudo (nopasswd)
      -U SUDO_USER, --sudo-user=SUDO_USER
                            desired sudo user (default=root)
      --syntax-check        perform a syntax check on the playbook, but do not
                            execute it
      -t TAGS, --tags=TAGS  only run plays and tasks tagged with these values
      -T TIMEOUT, --timeout=TIMEOUT
                            override the SSH timeout in seconds (default=10)
      -u REMOTE_USER, --user=REMOTE_USER
                            connect as this user (default=jw35)
      --vault-password-file=VAULT_PASSWORD_FILE
                            vault password file
      -v, --verbose         verbose mode (-vvv for more, -vvvv to enable
                            connection debugging)
      --version             show program's version number and exit

## ansible-vault


[playbooks\_vault.html](http://docs.ansible.com/playbooks_vault.html)

    Usage: ansible-vault [create|decrypt|edit|encrypt|rekey] [--help] [options] file_name

    Options:
      -h, --help  show this help message and exit

    See 'ansible-vault <command> --help' for more information on a specific command.

## ansible-doc

    Usage: ansible-doc [options] [module...]

    Show Ansible module documentation

    Options:
      --version             show program's version number and exit
      -h, --help            show this help message and exit
      -M MODULE_PATH, --module-path=MODULE_PATH
                                 Ansible modules/ directory
      -l, --list            List available modules
      -s, --snippet         Show playbook snippet for specified module(s)
      -v                    Show version number and exit
   
## ansible-galaxy

    Usage: ansible-galaxy [init|info|install|list|remove] [--help] [options] ...

    Options:
      -h, --help  show this help message and exit

      See 'ansible-galaxy <command> --help' for more information on a
      specific command 

## ansible-pull

    Usage: ansible-pull [options] [playbook.yml]

    ansible-pull: error: URL for repository not specified, use -h for help

how to use sysctl with ansible

[root@localhost ~]# sysctl -a |grep vm.swappiness
vm.swappiness = 30

[root@localhost ~]# ansible-galaxy init sysctl
- sysctl was created successfully

[root@localhost ~]# ansible-doc sysctl

[root@localhost ~]# vim test.yml

- hosts: localhost
  roles:
    - sysctl
  vars:
    sysctl_settings:
      - name: vm.swappiness
        value: 90

[root@localhost ~]# vim sysctl/tasks/main.yml

# tasks file for sysctl
- name: sysctl settings
  sysctl:
    name: "{{ item.name }}"
    value: "{{ item.value }}"
    reload: true
    state: "{{ item.state | default('present') }}"
  with_items: "{{ sysctl_settings }}"

[root@localhost ~]# ansible-playbook test.yml

PLAY [localhost] *******************************************************************************************************************************

TASK [Gathering Facts] *************************************************************************************************************************
ok: [localhost]

TASK [sysctl : sysctl settings] ****************************************************************************************************************
changed: [localhost] => (item={u'state': u'present', u'name': u'vm.swappiness', u'value': 90})

PLAY RECAP *************************************************************************************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=0

[root@localhost ~]# sysctl -a |grep vm.swappiness
vm.swappiness = 90

ansible-playbooks one of the following is required name list

Troubleshooting the Ansible error "one of the following is required: name, list".

Run the playbook

[root@controller playbook]# ansible-playbook package.yaml

Error message

[root@controller playbook]# ansible-playbook package.yaml
[WARNING]: While constructing a mapping from /root/ansible/playbook/package.yaml, line 11, column 5, found a duplicate dict key (name). Using last defined value only.

[WARNING]: Ignoring invalid attribute: state

PLAY [app] ****************************************************************************************************************************************************************************************************************************************************************

TASK [Gathering Facts] ****************************************************************************************************************************************************************************************************************************************************
ok: [node2.rmohan.com]

TASK [Upgrade all packages] ***********************************************************************************************************************************************************************************************************************************************
ok: [node2.rmohan.com]

TASK [install epel-release] ***********************************************************************************************************************************************************************************************************************************************
skipping: [node2.rmohan.com]

TASK [{{ item }}] *********************************************************************************************************************************************************************************************************************************************************
failed: [node2.rmohan.com] (item=libselinux-python) => {"changed": false, "item": "libselinux-python", "msg": "one of the following is required: name,list"}
failed: [node2.rmohan.com] (item=docker-python) => {"changed": false, "item": "docker-python", "msg": "one of the following is required: name,list"}
failed: [node2.rmohan.com] (item=python-yaml) => {"changed": false, "item": "python-yaml", "msg": "one of the following is required: name,list"}
failed: [node2.rmohan.com] (item=net-tools) => {"changed": false, "item": "net-tools", "msg": "one of the following is required: name,list"}
failed: [node2.rmohan.com] (item=nfs-utils) => {"changed": false, "item": "nfs-utils", "msg": "one of the following is required: name,list"}
failed: [node2.rmohan.com] (item=mc) => {"changed": false, "item": "mc", "msg": "one of the following is required: name,list"}
failed: [node2.rmohan.com] (item=vim) => {"changed": false, "item": "vim", "msg": "one of the following is required: name,list"}
failed: [node2.rmohan.com] (item=wget) => {"changed": false, "item": "wget", "msg": "one of the following is required: name,list"}
failed: [node2.rmohan.com] (item=git) => {"changed": false, "item": "git", "msg": "one of the following is required: name,list"}
failed: [node2.rmohan.com] (item=ntp) => {"changed": false, "item": "ntp", "msg": "one of the following is required: name,list"}
failed: [node2.rmohan.com] (item=telnet) => {"changed": false, "item": "telnet", "msg": "one of the following is required: name,list"}
failed: [node2.rmohan.com] (item=mtr) => {"changed": false, "item": "mtr", "msg": "one of the following is required: name,list"}
failed: [node2.rmohan.com] (item=htop) => {"changed": false, "item": "htop", "msg": "one of the following is required: name,list"}
failed: [node2.rmohan.com] (item=iotop) => {"changed": false, "item": "iotop", "msg": "one of the following is required: name,list"}
failed: [node2.rmohan.com] (item=mailx) => {"changed": false, "item": "mailx", "msg": "one of the following is required: name,list"}
to retry, use: --limit @/root/ansible/playbook/package.retry

PLAY RECAP ****************************************************************************************************************************************************************************************************************************************************************
node2.rmohan.com : ok=2 changed=0 unreachable=0 failed=1

The playbook as written

- name: Install zabbix agent
  yum: name={{item}} state=present
  with_items:
    - libselinux-python
    - docker-python
    - python-yaml
    - net-tools
    - nfs-utils
    - mc
    - vim
    - wget
    - git
    - ntp
    - telnet
    - mtr
    - htop
    - iotop
    - mailx
  tags: install

Troubleshoot

On close inspection, the Jinja2 expression was missing its surrounding spaces: {{item}} should be written {{ item }}. Change the task to something like this:

- name: Install zabbix agent
  yum: name={{ item }} state=present

- name: Install system packages.
  yum: name={{ item }} state=present
  with_items:
    - libselinux-python
    - docker-python
    - python-yaml
    - net-tools
    - nfs-utils
    - mc
    - vim
    - wget
    - git
    - ntp
    - telnet
    - mtr
    - htop
    - iotop
    - mailx
  tags: install

MYSQL BINARY INSTALL CENTOS7

Environment: Virtual Machine + CentOS 7

1. Download the binary package; the following mysql-5.7.19-linux-glibc2.12-x86_64.tar.gz link is from the official website.

cd /usr/local/src

wget https://dev.mysql.com/get/Downloads/MySQL-5.7/mysql-5.7.19-linux-glibc2.12-x86_64.tar.gz
2. Extract and rename

[root@beta src]# tar zxvf mysql-5.7.19-linux-glibc2.12-x86_64.tar.gz

[root@beta src]# ls
index.html?id=471614 mysql-5.7.19-linux-glibc2.12-x86_64 mysql-5.7.19-linux-glibc2.12-x86_64.tar.gz
[root@beta src]# mv mysql-5.7.19-linux-glibc2.12-x86_64 /usr/local/mysql
3. Initialize

[root@beta mysql]# useradd -M -s /sbin/nologin mysql

[root@beta mysql]# ls
bin COPYING docs include lib man README share support-files
[root@beta mysql]# mkdir -p /usr/local/mysql/data/mysql
[root@beta mysql]# chown mysql /usr/local/mysql/data/mysql
In the following step, pay attention to the last line of output:

[root@beta mysql]# ./bin/mysqld --initialize --user=mysql --datadir=/usr/local/mysql/data/mysql
2017-09-27T03:44:47.999985Z 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
2017-09-27T03:44:49.011240Z 0 [Warning] InnoDB: New log files created, LSN=45790
2017-09-27T03:44:49.180334Z 0 [Warning] InnoDB: Creating foreign key constraint system tables.
2017-09-27T03:44:49.245777Z 0 [Warning] No existing UUID has been found, so we assume that this is the first time that this server has been started. Generating a new UUID: 3649ce8c-a336-11e7-a43f-000c292b2832.
2017-09-27T03:44:49.266053Z 0 [Warning] Gtid table is not ready to be used. Table 'mysql.gtid_executed' cannot be opened.
2017-09-27T03:44:49.268172Z 1 [Note] A temporary password is generated for root@localhost: ADB&yGx-d8ab

ADB&yGx-d8ab
Then execute:

[root@beta mysql]# ./bin/mysql_ssl_rsa_setup --datadir=/usr/local/mysql/data/mysql
Generating a 2048 bit RSA private key
....................+++
...+++
writing new private key to 'ca-key.pem'
-----
Generating a 2048 bit RSA private key
.......................+++
...............................................................................+++
writing new private key to 'server-key.pem'
-----
Generating a 2048 bit RSA private key
..................+++
.......................+++
writing new private key to 'client-key.pem'
4. Copy the configuration file and startup script

First check whether /etc/my.cnf exists; if not:

cp support-files/my-default.cnf /etc/my.cnf
Edit /etc/my.cnf; focus on the following settings and comment out the others as far as possible:

basedir = /usr/local/mysql
datadir = /usr/local/mysql/data/mysql
socket = /tmp/mysql.sock
Next, copy the startup script:

cp support-files/mysql.server /etc/init.d/mysqld
Edit /etc/init.d/mysqld and modify only the following:

basedir=/usr/local/mysql
datadir=/usr/local/mysql/data/mysql
Add /etc/init.d/mysqld to the startup item:

[root@beta mysql]# chkconfig --add mysqld
[root@beta mysql]# chkconfig --list

Note: on a systemd host, `chkconfig --list` only shows SysV services; it suggests `systemctl list-unit-files` to list native systemd units and `systemctl list-dependencies [target]` to see services enabled for a particular target.

5. Start the service

/etc/init.d/mysqld start
6. Set the root password

Log in with the initial password (see step 3 above)

/usr/local/mysql/bin/mysql -uroot -p    # enter the temporary password from step 3 when prompted
At the mysql> prompt, enter: set password = password('new password');

Exit, login with new password

2. If you forgot the initial password

Add a skip-grant-tables line below the [mysqld] section of /etc/my.cnf, then restart mysqld: /etc/init.d/mysqld restart

[mysqld]
skip-grant-tables
basedir=/usr/local/mysql
datadir=/usr/local/mysql/data/mysql
socket=/tmp/mysql.sock

[root@beta ~]# /etc/init.d/mysqld restart
Shutting down MySQL.. SUCCESS!
Starting MySQL.. SUCCESS!
Log in to mysql again:

[root@beta ~]# /usr/local/mysql/bin/mysql -uroot
mysql> enter: update mysql.user set authentication_string = password('123333') where user = 'root';

mysql> update mysql.user set authentication_string=password('123333') where user='root';
Query OK, 1 row affected, 1 warning (0.00 sec)
Rows matched: 1 Changed: 1 Warnings: 1
Quit, delete the skip-grant-tables line that was added to my.cnf, and restart mysqld.

Log in to mysql again with the new password:

[root@beta ~]# /usr/local/mysql/bin/mysql -uroot -p'123333'
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 13
Server version: 5.7.19 MySQL Community Server (GPL)

Copyright (c) 2000, 2017, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql>

RHEL / CentOS 7 Network Teaming


Below is an example of how to configure network teaming on RHEL/CentOS 7. It is assumed that you have at least two interface cards.

Show Current Network Interfaces
[root@rhce-server ~]$ ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eno16777736: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
link/ether 00:0c:29:69:bf:87 brd ff:ff:ff:ff:ff:ff
3: eno33554984: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
link/ether 00:0c:29:69:bf:91 brd ff:ff:ff:ff:ff:ff

The two devices I will be teaming are eno33554984 and eno16777736.

Create the Team Interface
[root@rhce-server ~]$ nmcli connection add type team con-name team0 ifname team0 config '{"runner": {"name": "activebackup"}}'

This will configure the interface for activebackup. Other runners include broadcast, roundrobin, loadbalance, and lacp.

Configure team0’s IP Address
[root@rhce-server ~]# nmcli connection modify team0 ipv4.addresses 192.168.1.22/24
[root@rhce-server ~]# nmcli connection modify team0 ipv4.method manual

You can also configure IPv6 address by setting the ipv6.addresses field.

Configure the Team Slaves
[root@rhce-server ~]# nmcli connection add type team-slave con-name team0-slave1 ifname eno33554984 master team0
Connection ‘team0-slave1’ (4167ea50-7d3a-4024-98e1-3058a4dcf0fa) successfully added.
[root@rhce-server ~]# nmcli connection add type team-slave con-name team0-slave2 ifname eno16777736 master team0
Connection ‘team0-slave2’ (d5ed65d1-16a7-4bc7-8c4d-78e17a1ed8b3) successfully added.

Check the Connection
[root@rhce-server ~]# teamdctl team0 state
setup:
  runner: activebackup
ports:
  eno16777736
    link watches:
      link summary: up
      instance[link_watch_0]:
        name: ethtool
        link: up
  eno33554984
    link watches:
      link summary: up
      instance[link_watch_0]:
        name: ethtool
        link: up
runner:
  active port: eno16777736

[root@rhce-server ~]# ping -I team0 192.168.1.1
PING 192.168.1.1 (192.168.1.1) from 192.168.1.24 team0: 56(84) bytes of data.
64 bytes from 192.168.1.1: icmp_seq=1 ttl=64 time=1.38 ms

Test Failover
[root@rhce-server ~]# nmcli device disconnect eno16777736
[root@rhce-server ~]# teamdctl team0 state
setup:
  runner: activebackup
ports:
  eno33554984
    link watches:
      link summary: up
      instance[link_watch_0]:
        name: ethtool
        link: up
runner:
  active port: eno33554984

oracle rman backup

How to sync a standby database that is lagging behind the primary database
Primary Database cluster: cluster1.rmohan.com
Standby Database cluster: cluster2.rmohan.com

Primary Database: prim
Standby database: stand

Database version:11.2.0.1.0

Reasons:
1. A network outage between the primary and the standby database can lead to archive
gaps. Data Guard can detect the archive gaps automatically and fetch the missing logs as
soon as the connection is re-established.

2. Archive logs may also have been lost on the primary database, or the archives may be
corrupted with no valid backups available.

In such cases, where the standby lags far behind the primary database, incremental backups can be used
to roll the physical standby database forward and bring it back in sync with the primary database.

At primary database:-
SQL> select status,instance_name,database_role from v$database,v$instance;

STATUS INSTANCE_NAME DATABASE_ROLE
———— —————- —————-
OPEN prim PRIMARY

SQL> select thread#,max(sequence#) from v$archived_log group by thread#;

THREAD# MAX(SEQUENCE#)
———- ————–
1 214

At standby database:-
SQL> select status,instance_name,database_role from v$database,v$instance;

STATUS INSTANCE_NAME DATABASE_ROLE
———— —————- —————-
OPEN stand PHYSICAL STANDBY

SQL> select thread#,max(sequence#) from v$archived_log group by thread#;

THREAD# MAX(SEQUENCE#)
———- ————–
1 42
So we can see the standby database has an archive gap of (214 - 42) = 172 logs.

Step 1: Take a note of the Current SCN of the Physical Standby Database.
SQL> select current_scn from v$database;

CURRENT_SCN
———–
1022779

Step 2 : Cancel the Managed Recovery Process on the Standby database.
SQL> alter database recover managed standby database cancel;

Database altered.

Step 3: On the Primary database, take the incremental SCN backup from the SCN that is currently recorded on the standby database (1022779)
At primary database:-

RMAN> backup incremental from scn 1022779 database format '/tmp/rman_bkp/stnd_backp_%U.bak';

Starting backup at 28-DEC-14

using channel ORA_DISK_1
backup will be obsolete on date 04-JAN-15
archived logs will not be kept or backed up
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00001 name=/u01/app/oracle/oradata/prim/system01.dbf
input datafile file number=00002 name=/u01/app/oracle/oradata/prim/sysaux01.dbf
input datafile file number=00005 name=/u01/app/oracle/oradata/prim/example01.dbf
input datafile file number=00003 name=/u01/app/oracle/oradata/prim/undotbs01.dbf
input datafile file number=00004 name=/u01/app/oracle/oradata/prim/users01.dbf
channel ORA_DISK_1: starting piece 1 at 28-DEC-14
channel ORA_DISK_1: finished piece 1 at 28-DEC-14
piece handle=/tmp/rman_bkp/stnd_backp_0cpr8v08_1_1.bak tag=TAG20141228T025048 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:25

using channel ORA_DISK_1
backup will be obsolete on date 04-JAN-15
archived logs will not be kept or backed up
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including current control file in backup set
channel ORA_DISK_1: starting piece 1 at 28-DEC-14
channel ORA_DISK_1: finished piece 1 at 28-DEC-14
piece handle=/tmp/rman_bkp/stnd_backp_0dpr8v12_1_1.bak tag=TAG20141228T025048 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 28-DEC-14

We took the backup in the /tmp/rman_bkp directory; ensure it contains nothing besides the incremental SCN backups.

Step 4: Take the standby controlfile backup of the Primary database controlfile.

At primary database:

RMAN> backup current controlfile for standby format '/tmp/rman_bkp/stnd_%U.ctl';

Starting backup at 28-DEC-14
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including standby control file in backup set
channel ORA_DISK_1: starting piece 1 at 28-DEC-14
channel ORA_DISK_1: finished piece 1 at 28-DEC-14
piece handle=/tmp/rman_bkp/stnd_0epr8v4e_1_1.ctl tag=TAG20141228T025301 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 28-DEC-14

Starting Control File and SPFILE Autobackup at 28-DEC-14
piece handle=/u01/app/oracle/flash_recovery_area/PRIM/autobackup/2014_12_28/o1_mf_s_867466384_b9y8sr8k_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 28-DEC-14

Step 5: Transfer the backups from the Primary cluster to the Standby cluster.
[oracle@cluster1 ~]$ cd /tmp/rman_bkp/
[oracle@cluster1 rman_bkp]$ ls -ltrh
total 24M
-rw-r—–. 1 oracle oinstall 4.2M Dec 28 02:51 stnd_backp_0cpr8v08_1_1.bak
-rw-r—–. 1 oracle oinstall 9.7M Dec 28 02:51 stnd_backp_0dpr8v12_1_1.bak
-rw-r—–. 1 oracle oinstall 9.7M Dec 28 02:53 stnd_0epr8v4e_1_1.ctl

oracle@cluster1 rman_bkp]$ scp *.* oracle@cluster2:/tmp/rman_bkp/
oracle@cluster2’s password:
stnd_0epr8v4e_1_1.ctl 100% 9856KB 9.6MB/s 00:00
stnd_backp_0cpr8v08_1_1.bak 100% 4296KB 4.2MB/s 00:00
stnd_backp_0dpr8v12_1_1.bak 100% 9856KB 9.6MB/s 00:00

Step 6: On the standby cluster, connect the Standby Database through RMAN and catalog the copied
incremental backups so that the Controlfile of the Standby Database would be aware of these
incremental backups.

At standby database:-


[oracle@cluster2 ~]$ rman target /
RMAN> catalog start with '/tmp/rman_bkp';

using target database control file instead of recovery catalog
searching for all files that match the pattern /tmp/rman_bkp

List of Files Unknown to the Database
=====================================
File Name: /tmp/rman_bkp/stnd_0epr8v4e_1_1.ctl
File Name: /tmp/rman_bkp/stnd_backp_0dpr8v12_1_1.bak
File Name: /tmp/rman_bkp/stnd_backp_0cpr8v08_1_1.bak

Do you really want to catalog the above files (enter YES or NO)? yes
cataloging files…
cataloging done

List of Cataloged Files
=======================
File Name: /tmp/rman_bkp/stnd_0epr8v4e_1_1.ctl
File Name: /tmp/rman_bkp/stnd_backp_0dpr8v12_1_1.bak
File Name: /tmp/rman_bkp/stnd_backp_0cpr8v08_1_1.bak

Step 7. Shutdown the database and open it in mount stage for recovery purpose.
SQL> shut immediate;
SQL> startup mount;

Step 8.Now recover the database :-
[oracle@cluster2 ~]$ rman target /
RMAN> recover database noredo;

Starting recover at 28-DEC-14
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=25 device type=DISK
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00001: /u01/app/oracle/oradata/stand/system01.dbf
destination for restore of datafile 00002: /u01/app/oracle/oradata/stand/sysaux01.dbf
destination for restore of datafile 00003: /u01/app/oracle/oradata/stand/undotbs01.dbf
destination for restore of datafile 00004: /u01/app/oracle/oradata/stand/users01.dbf
destination for restore of datafile 00005: /u01/app/oracle/oradata/stand/example01.dbf
channel ORA_DISK_1: reading from backup piece /tmp/rman_bkp/stnd_backp_0cpr8v08_1_1.bak
channel ORA_DISK_1: piece handle=/tmp/rman_bkp/stnd_backp_0cpr8v08_1_1.bak tag=TAG20141228T025048
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:03

Finished recover at 28-DEC-14
exit.

Step 9 : Shutdown the physical standby database, start it in nomount stage and restore the standby controlfile
backup that we had taken from the primary database.

SQL> shut immediate;
SQL> startup nomount;

[oracle@cluster2 rman_bkp]$ rman target /
Recovery Manager: Release 11.2.0.1.0 - Production on Sun Dec 28 03:08:45 2014

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

connected to target database: PRIM (not mounted)

RMAN> restore standby controlfile from '/tmp/rman_bkp/stnd_0epr8v4e_1_1.ctl';

Starting restore at 28-DEC-14
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=20 device type=DISK

channel ORA_DISK_1: restoring control file
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
output file name=/u01/app/oracle/oradata/stand/stand.ctl
output file name=/u01/app/oracle/flash_recovery_area/stand/stand.ctl
Finished restore at 28-DEC-14

Step 10: Shutdown the standby database and mount the standby database, so that the standby database would
be mounted with the new controlfile that was restored in the previous step.

SQL> shut immediate;
SQL> startup mount;

At standby database:-
SQL> alter database recover managed standby database disconnect from session;

At primary database:-
SQL> select thread#,max(sequence#) from v$archived_log group by thread#;

THREAD# MAX(SEQUENCE#)
———- ————–
1 215

At standby database:-
SQL> select thread#,max(sequence#) from v$archived_log group by thread#;

THREAD# MAX(SEQUENCE#)
———- ————–
1 215

Step 11. Now cancel the recovery and open the database:
SQL> alter database recover managed standby database cancel;

SQL> alter database open;
Database altered.

SQL> alter database recover managed standby database using current logfile disconnect from session;
Database altered.

SQL> select open_mode from v$database;

OPEN_MODE
——————–
READ ONLY WITH APPLY

Now standby database is in sync with the Primary Database.

centos 7 cluster

[root@clusterserver1 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.20 clusterserver1.rmohan.com clusterserver1
192.168.1.21 clusterserver2.rmohan.com clusterserver2
192.168.1.22 clusterserver3.rmohan.com clusterserver3

perl -pi.orig -e 's/SELINUX=enforcing/SELINUX=permissive/g' /etc/selinux/config

setenforce 0

timedatectl status

yum install -y ntp
systemctl enable ntpd ; systemctl start ntpd

run ssh-keygen

[root@clusterserver1 ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
e4:57:e7:7c:2e:dd:82:9f:d5:c7:57:f9:ef:ce:d5:e0 root@clusterserver1.rmohan.com
The key’s randomart image is:
+--[ RSA 2048]----+
| |
| |
| . . . |
| o . + .|
| S . +.o|
| . o **|
| . E &|
| . *=|
| oo=|
+-----------------+
[root@clusterserver1 ~]#

for i in clusterserver1 clusterserver2 clusterserver3 ; do ssh-copy-id $i; done

/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed — if you are prompted now it is to install the new keys
root@clusterserver1’s password:
Permission denied, please try again.
root@clusterserver1’s password:

Number of key(s) added: 1

Now try logging into the machine, with: “ssh ‘clusterserver1′”
and check to make sure that only the key(s) you wanted were added.

The authenticity of host ‘clusterserver2 (192.168.1.21)’ can’t be established.
ECDSA key fingerprint is 43:25:9c:32:53:18:33:a9:25:f7:cd:bb:b0:64:80:fd.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed — if you are prompted now it is to install the new keys
root@clusterserver2’s password:

Number of key(s) added: 1

Now try logging into the machine, with: “ssh ‘clusterserver2′”
and check to make sure that only the key(s) you wanted were added.

The authenticity of host ‘clusterserver3 (192.168.1.22)’ can’t be established.
ECDSA key fingerprint is 62:79:b1:c7:9b:de:a3:5e:a4:3d:e0:15:2b:f8:c2:f7.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed — if you are prompted now it is to install the new keys
root@clusterserver3’s password:

Number of key(s) added: 1

Now try logging into the machine, with: “ssh ‘clusterserver3′”
and check to make sure that only the key(s) you wanted were added.

yum install iscsi-initiator-utils -y

systemctl enable iscsi
systemctl start iscsi

iscsiadm -m discovery -t sendtargets -p 192.168.1.90:3260

iscsiadm --mode node --targetname iqn.2006-01.com.openfiler:tsn.b01850dab96a --portal 192.168.1.90 --login

iscsiadm -m node -T iqn.2006-01.com.openfiler:tsn.b01850dab96a -p 192.168.1.90:3260 -l

Install corosync and pacemaker on the nodes

yum -y install lvm2-cluster corosync pacemaker pcs fence-agents-all

systemctl enable pcsd.service

systemctl start pcsd.service

echo test123 | passwd --stdin hacluster

pcs cluster auth clusterserver1 clusterserver2 clusterserver3

[root@clusterserver1 ~]# pcs cluster auth clusterserver1 clusterserver2 clusterserver3
Username: hacluster
Password:
clusterserver3: Authorized
clusterserver2: Authorized
clusterserver1: Authorized
[root@clusterserver1 ~]#

[root@clusterserver1 ~]# ls -lt /var/lib/pcsd/
total 20
-rw------- 1 root root 250 Jan 4 03:33 tokens
-rw-r--r-- 1 root root 1542 Jan 4 03:33 pcs_users.conf
-rwx------ 1 root root 60 Jan 4 03:28 pcsd.cookiesecret
-rwx------ 1 root root 1233 Jan 4 03:28 pcsd.crt
-rwx------ 1 root root 1679 Jan 4 03:28 pcsd.key
[root@clusterserver1 ~]#

pcs cluster setup --name webcluster clusterserver1 clusterserver2 clusterserver3

[root@clusterserver1 ~]# pcs cluster setup --name webcluster clusterserver1 clusterserver2 clusterserver3
Shutting down pacemaker/corosync services…
Redirecting to /bin/systemctl stop pacemaker.service
Redirecting to /bin/systemctl stop corosync.service
Killing any remaining services…
Removing all cluster configuration files…
clusterserver1: Succeeded
clusterserver2: Succeeded
clusterserver3: Succeeded
Synchronizing pcsd certificates on nodes clusterserver1, clusterserver2, clusterserver3…
clusterserver3: Success
clusterserver2: Success
clusterserver1: Success

Restarting pcsd on the nodes in order to reload the certificates...
clusterserver3: Success
clusterserver2: Success
clusterserver1: Success
[root@clusterserver1 ~]#

[root@clusterserver1 ~]# ls /etc/corosync/
corosync.conf corosync.conf.example corosync.conf.example.udpu corosync.xml.example uidgid.d/
[root@clusterserver1 ~]#
[root@clusterserver1 corosync]# cat corosync.conf
totem {
version: 2
secauth: off
cluster_name: webcluster
transport: udpu
}

nodelist {
node {
ring0_addr: clusterserver1
nodeid: 1
}

node {
ring0_addr: clusterserver2
nodeid: 2
}

node {
ring0_addr: clusterserver3
nodeid: 3
}
}

quorum {
provider: corosync_votequorum
}

logging {
to_logfile: yes
logfile: /var/log/cluster/corosync.log
to_syslog: yes
}
[root@clusterserver1 corosync]#

[root@clusterserver2 ~]# pcs status
Error: cluster is not currently running on this node
[root@clusterserver2 ~]#

[root@clusterserver3 ~]# pcs status
Error: cluster is not currently running on this node
[root@clusterserver3 ~]#

pcs cluster enable --all

[root@clusterserver1 corosync]# pcs cluster enable --all
clusterserver1: Cluster Enabled
clusterserver2: Cluster Enabled
clusterserver3: Cluster Enabled
[root@clusterserver1 corosync]#

Start the cluster
•From any node: pcs cluster start --all

[root@clusterserver1 corosync]# pcs status
Cluster name: webcluster
WARNING: no stonith devices and stonith-enabled is not false
Last updated: Mon Jan 4 03:39:26 2016 Last change: Mon Jan 4 03:39:24 2016 by hacluster via crmd on clusterserver1
Stack: corosync
Current DC: clusterserver1 (version 1.1.13-10.el7-44eb2dd) – partition with quorum
3 nodes and 0 resources configured

Online: [ clusterserver1 clusterserver2 clusterserver3 ]

Full list of resources:

PCSD Status:
clusterserver1: Online
clusterserver2: Online
clusterserver3: Online

Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
[root@clusterserver1 corosync]#

Verify Corosync Installation
•corosync-cfgtool -s

[root@clusterserver1 corosync]# corosync-cfgtool -s
Printing ring status.
Local node ID 1
RING ID 0
id = 192.168.1.20
status = ring 0 active with no faults
[root@clusterserver1 corosync]#

[root@clusterserver2 ~]# corosync-cfgtool -s
Printing ring status.
Local node ID 2
RING ID 0
id = 192.168.1.21
status = ring 0 active with no faults
[root@clusterserver2 ~]#

Verify Corosync Installation
•corosync-cmapctl | grep members

[root@clusterserver2 ~]# corosync-cmapctl | grep members
runtime.totem.pg.mrp.srp.members.1.config_version (u64) = 0
runtime.totem.pg.mrp.srp.members.1.ip (str) = r(0) ip(192.168.1.20)
runtime.totem.pg.mrp.srp.members.1.join_count (u32) = 1
runtime.totem.pg.mrp.srp.members.1.status (str) = joined
runtime.totem.pg.mrp.srp.members.2.config_version (u64) = 0
runtime.totem.pg.mrp.srp.members.2.ip (str) = r(0) ip(192.168.1.21)
runtime.totem.pg.mrp.srp.members.2.join_count (u32) = 1
runtime.totem.pg.mrp.srp.members.2.status (str) = joined
runtime.totem.pg.mrp.srp.members.3.config_version (u64) = 0
runtime.totem.pg.mrp.srp.members.3.ip (str) = r(0) ip(192.168.1.22)
runtime.totem.pg.mrp.srp.members.3.join_count (u32) = 1
runtime.totem.pg.mrp.srp.members.3.status (str) = joined
[root@clusterserver2 ~]#

Verify Corosync Installation
•crm_verify -L -V

[root@clusterserver2 ~]# crm_verify -L -V
error: unpack_resources: Resource start-up disabled since no STONITH resources have been defined
error: unpack_resources: Either configure some or disable STONITH with the stonith-enabled option
error: unpack_resources: NOTE: Clusters with shared data need STONITH to ensure data integrity
Errors found during check: config not valid
[root@clusterserver2 ~]#
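For a lab cluster with no fencing hardware, one way to clear these errors is to disable STONITH and re-run the check. This is only a sketch for testing; production clusters with shared storage should keep STONITH enabled:

pcs property set stonith-enabled=false
crm_verify -L -V
pcs property list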

Nginx

nginx is a high-performance web server. It is a much more flexible and lightweight program than Apache.

yum install epel-release

yum install nginx

ifconfig eth0 | grep inet | awk '{ print $2 }'

wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u60-b27/jdk-8u60-linux-x64.tar.gz"
wget http://mirror.nus.edu.sg/apache/tomcat/tomcat-8/v8.0.30/bin/apache-tomcat-8.0.30.tar.gz
mkdir /usr/java/
tar xzf jdk-8u60-linux-x64.tar.gz -C /usr/java/

cd /usr/java/jdk1.8.0_60/
[root@cluster1 java]# ln -s /usr/java/jdk1.8.0_60/bin/java /usr/bin/java
[root@cluster1 java]# alternatives --install /usr/bin/java java /usr/java/jdk1.8.0_60/bin/java 2

alternatives --config java

vi /etc/profile.d/java.sh
export JAVA_HOME=/usr/java/jdk1.8.0_60
export JRE_HOME=$JAVA_HOME/jre
export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH

Three, Tomcat load balancing configuration

When Nginx starts it loads the default configuration file /etc/nginx/nginx.conf, and nginx.conf in turn includes every .conf file in the /etc/nginx/conf.d directory.

Custom configuration can therefore be kept in separate .conf files; as long as they are placed in /etc/nginx/conf.d they are picked up automatically, which makes maintenance easy.

Start Tomcat on each backend node:

/usr/tomcat/apache-tomcat-8.0.30/bin/startup.sh

Then create tomcats.conf with the upstream definition:

vi /etc/nginx/conf.d/tomcats.conf

upstream tomcats {
ip_hash;
server 192.168.1.60:8080;
server 192.168.1.62:8080;
server 192.168.0.63:8080;
}

Modify default.conf: vi /etc/nginx/conf.d/default.conf, amend as follows:
vi /etc/nginx/conf.d/default.conf
Comment out the existing location block:
#location / {
# root /usr/share/nginx/html;
# index index.html index.htm;
#}

# new configuration: by default forward requests to the upstream defined in tomcats.conf
location / {
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header REMOTE-HOST $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass http://tomcats;
}

After saving reload the configuration: nginx -s reload

Four, separate static resource configuration

Modify default.conf: vi /etc/nginx/conf.d/default.conf, add the following configuration:
vi /etc/nginx/conf.d/default.conf

# All js and css static resource requests are handled by Nginx directly

location ~.*\.(js|css)$ {
root /opt/static-resources;
expires 12h;
}

# All images and other multimedia static resource files are handled by Nginx directly

location ~.*\.(html|jpg|jpeg|png|bmp|gif|ico|mp3|mid|wma|mp4|swf|flv|rar|zip|txt|doc|ppt|xls|pdf)$ {
root /opt/static-resources;
expires 7d;
}

Create a Directory for the Certificate
mkdir /etc/nginx/ssl
cd /etc/nginx/ssl
openssl genrsa -des3 -out server.key 2048
openssl req -new -key server.key -out server.csr
cp server.key server.key.org
openssl rsa -in server.key.org -out server.key
openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt

server {
listen 80;
listen 443 default ssl;
server_name cluster1.rmohan.com;
keepalive_timeout 70;
# ssl on;
ssl_certificate /etc/nginx/ssl/server.crt;
ssl_certificate_key /etc/nginx/ssl/server.key;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers HIGH:!aNULL:!MD5;
ssl_prefer_server_ciphers on;
}
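Optionally, plain HTTP traffic can be redirected to HTTPS. A minimal sketch, assuming the same cluster1.rmohan.com server name and the self-signed certificate created above; adjust to your own setup:

server {
    listen 80;
    server_name cluster1.rmohan.com;
    return 301 https://$host$request_uri;
}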

Nginx server security configuration

First, turn off SELinux
Security-Enhanced Linux (SELinux) is a Linux kernel feature that provides a security policy and access control mechanism.
However, for a simple web server the extra security SELinux provides is often out of proportion to the complexity it adds, so many administrators turn it off.

sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config

/usr/sbin/sestatus -v # Check status

Second, mount the Nginx partition with least privilege

Put the Nginx document root on a separate partition.

For example, create a new partition /dev/sda5 (the first logical partition) and mount it at /nginx.
Make sure /nginx is mounted with the noexec, nodev and nosuid options.

The corresponding /etc/fstab entry for /nginx: LABEL=/nginx /nginx ext3 defaults,nosuid,noexec,nodev 1 2

Note: You need to create a new partition using fdisk and mkfs.ext3 command.
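A rough sketch of those steps, assuming the new partition really is /dev/sda5 and that you label the filesystem /nginx; adapt the device and label to your own disk layout before running anything:

fdisk /dev/sda                      # create the logical partition /dev/sda5
partprobe /dev/sda                  # re-read the partition table
mkfs.ext3 -L /nginx /dev/sda5       # create the filesystem with label /nginx
mkdir -p /nginx
echo 'LABEL=/nginx /nginx ext3 defaults,nosuid,noexec,nodev 1 2' >> /etc/fstab
mount /nginx
mount | grep /nginx                 # confirm the nosuid,noexec,nodev options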
Third, harden the Linux kernel via /etc/sysctl.conf

You can control and configure Linux kernel and network settings by editing /etc/sysctl.conf:

# Avoid a smurf attack

net.ipv4.icmp_echo_ignore_broadcasts = 1

# Turn on protection for bad icmp error messages

net.ipv4.icmp_ignore_bogus_error_responses = 1

# Turn on syncookies for SYN flood attack protection

net.ipv4.tcp_syncookies = 1

# Turn on and log spoofed, source routed, and redirect packets

net.ipv4.conf.all.log_martians = 1

net.ipv4.conf.default.log_martians = 1

# No source routed packets here

net.ipv4.conf.all.accept_source_route = 0

net.ipv4.conf.default.accept_source_route = 0

# Turn on reverse path filtering

net.ipv4.conf.all.rp_filter = 1

net.ipv4.conf.default.rp_filter = 1

# Make sure no one can alter the routing tables

net.ipv4.conf.all.accept_redirects = 0

net.ipv4.conf.default.accept_redirects = 0

net.ipv4.conf.all.secure_redirects = 0

net.ipv4.conf.default.secure_redirects = 0

# Don’t act as a router

net.ipv4.ip_forward = 0

net.ipv4.conf.all.send_redirects = 0

net.ipv4.conf.default.send_redirects = 0

# Turn on ExecShield

kernel.exec-shield = 1

kernel.randomize_va_space = 1

# Tune IPv6

net.ipv6.conf.default.router_solicitations = 0

net.ipv6.conf.default.accept_ra_rtr_pref = 0

net.ipv6.conf.default.accept_ra_pinfo = 0

net.ipv6.conf.default.accept_ra_defrtr = 0

net.ipv6.conf.default.autoconf = 0

net.ipv6.conf.default.dad_transmits = 0

net.ipv6.conf.default.max_addresses = 1

# Optimization for port use for LBs

# Increase system file descriptor limit

fs.file-max = 65535

# Allow for more PIDs (to reduce rollover problems); may break some programs 32768

kernel.pid_max = 65536

# Increase system IP port limits

net.ipv4.ip_local_port_range = 2000 65000

# Increase TCP max buffer size settable using setsockopt()

net.ipv4.tcp_rmem = 4096 87380 8388608

net.ipv4.tcp_wmem = 4096 87380 8388608

# Increase Linux auto tuning TCP buffer limits

# min, default, and max number of bytes to use

# set max to at least 4MB, or higher if you use very high BDP paths

# Tcp Windows etc

net.core.rmem_max = 8388608

net.core.wmem_max = 8388608

net.core.netdev_max_backlog = 5000

net.ipv4.tcp_window_scaling = 1

Fourth, remove all unnecessary Nginx modules

Minimize the number of compiled-in modules by building Nginx from source with only the modules you actually need; limiting the web server to the required modules reduces risk.
For example, to disable the autoindex and SSI modules you can run:

./configure --without-http_autoindex_module --without-http_ssi_module
make && make install
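To confirm which modules a compiled binary actually includes, you can inspect its configure arguments. A quick check, assuming the default /usr/local/nginx prefix of a source build:

/usr/local/nginx/sbin/nginx -V 2>&1 | tr ' ' '\n' | grep -E 'with|without'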

Change the Nginx server name string by editing src/http/ngx_http_header_filter_module.c:

vim src/http/ngx_http_header_filter_module.c

static char ngx_http_server_string[] = "Server: nginx" CRLF;

static char ngx_http_server_full_string[] = "Server: " NGINX_VER CRLF;

//change to

static char ngx_http_server_string[] = "Server: Mohan Web Server" CRLF;

static char ngx_http_server_full_string[] = "Server: Mohan Web Server" CRLF;

Turn off the Nginx version number in responses:

server_tokens off;
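server_tokens is valid in the http, server and location contexts. A minimal placement sketch for /etc/nginx/nginx.conf:

http {
    server_tokens off;
    # ... rest of the existing http configuration ...
}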

Fifth, Iptables firewall restrictions

The following firewall rules block everything except:

incoming HTTP (TCP port 80) requests
incoming ICMP ping requests
outgoing NTP (port 123) requests
outgoing SMTP (TCP port 25) requests
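The original script is not reproduced here; the following is only a hedged sketch of iptables rules matching the policy above (default deny, allow inbound HTTP and ping, allow outbound NTP and SMTP). The interface, DNS and any SSH management rules you need are assumptions you must adapt before use:

#!/bin/bash
IPT=/sbin/iptables

# flush existing rules and set default-deny policies
$IPT -F
$IPT -P INPUT DROP
$IPT -P FORWARD DROP
$IPT -P OUTPUT DROP

# allow loopback and already-established traffic
$IPT -A INPUT -i lo -j ACCEPT
$IPT -A OUTPUT -o lo -j ACCEPT
$IPT -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
$IPT -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# incoming HTTP (TCP port 80) and ICMP ping
$IPT -A INPUT -p tcp --dport 80 -m state --state NEW -j ACCEPT
$IPT -A INPUT -p icmp --icmp-type echo-request -j ACCEPT

# outgoing NTP (UDP port 123) and SMTP (TCP port 25)
$IPT -A OUTPUT -p udp --dport 123 -m state --state NEW -j ACCEPT
$IPT -A OUTPUT -p tcp --dport 25 -m state --state NEW -j ACCEPT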

Six, control buffer overflow attacks

Edit nginx.conf and set buffer size limits for all clients as follows:

client_body_buffer_size 1K;

client_header_buffer_size 1k;

client_max_body_size 1k;

large_client_header_buffers 2 1k;

client_body_buffer_size 1k (default 8k or 16k): sets the buffer size for the client request body.
If the request body is larger than the buffer, the whole body or part of it is written to a temporary file.
client_header_buffer_size 1k: sets the buffer size for the client request header.
In most cases a request header is no larger than 1k, but if a client (for example a WAP client) sends a large cookie it may exceed 1k,
in which case Nginx allocates a larger buffer whose size is controlled by large_client_header_buffers.
client_max_body_size 1k: sets the maximum allowed size of the client request body, as given by the Content-Length header field of the request.

If the request body is larger than this value, the client receives a "Request Entity Too Large" (413) error. Remember that most browsers do not know how to display this error.
large_client_header_buffers: sets the number and size of the buffers used for large client request headers.
The request line cannot exceed the size of one buffer, otherwise nginx returns "Request URI too large" (414).
Likewise, the longest header field of the request cannot exceed one buffer, otherwise the server returns "Bad request" (400). Buffers are allocated only on demand.
The default buffer size equals the operating system page size, usually 4k or 8k; if a connection ends up in the keep-alive state, these buffers are released.

You should also control timeouts to improve server performance and to disconnect idle clients. Edit as follows:

client_body_timeout 10;
client_header_timeout 10;
keepalive_timeout 5 5;
send_timeout 10;

• client_body_timeout 10; - sets the timeout for reading the client request body. The timeout applies to the read step only; if the client sends nothing within this time, Nginx returns a "Request time out" (408) error.
• client_header_timeout 10; - sets the timeout for reading the client request header. Again, if the client sends nothing within this time, Nginx returns a "Request time out" (408) error.
• keepalive_timeout 5 5; - the first parameter sets how long a keep-alive connection to the client stays open; after this time the server closes it. The optional second parameter sets the time value in the "Keep-Alive: timeout=time" response header, which lets some browsers know when to close the connection themselves so the server does not have to. Without this parameter, nginx does not send a Keep-Alive header in the response. The two values may differ.
• send_timeout 10; - sets the timeout for transmitting the response to the client. If the client accepts nothing within this time, nginx closes the connection.

Seven, control concurrent connections

You can use the NginxHttpLimitZone module to limit the number of concurrent connections for a session or for a single IP address. Edit nginx.conf:

### Directive describes the zone, in which the session states are stored i.e. store in slimits. ###

### 1m can handle 32000 sessions with 32 bytes/session, set to 5m x 32000 session ###

limit_zone slimits $binary_remote_addr 5m;

### Control maximum number of simultaneous connections for one session i.e. ###

### restricts the amount of connections from a single ip address ###

limit_conn slimits 5;

The above limits each remote IP address to no more than five simultaneously open connections.
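Note that limit_zone is the older directive name; newer nginx versions replace it with limit_conn_zone (covered again later in this article). A rough equivalent of the configuration above, placed in nginx.conf, would be:

http {
    limit_conn_zone $binary_remote_addr zone=slimits:5m;

    server {
        location / {
            limit_conn slimits 5;
        }
    }
}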

Eight, only allow access to our domain

If a bot is simply scanning servers for random domain names, reject the request. Only requests for the configured virtual hosts or reverse-proxied domains should be allowed; do not answer requests made against the bare IP address.

if ($host !~ ^(test.in|www.test.in|images.test.in)$ ) {
return 444;
}

Nine, limit the available request methods

GET and POST are the most commonly used methods on the Internet. Web server methods are defined in RFC 2616. If the web server does not need all of the available methods, they should be disabled. The following configuration only allows the GET, HEAD and POST methods:

## Only allow these request methods ##

if ($request_method !~ ^(GET|HEAD|POST)$ ) {

return 444;

}

## Do not accept DELETE, SEARCH and other methods ##

More about HTTP method introduced

• GET is used to request a document.

• HEAD is identical to GET except that the server must not return a message body in the response.

• POST can be used for many things, such as storing or updating data, ordering a product, or sending e-mail by submitting a form. POST requests are usually handled by server-side code such as PHP, Perl or Python scripts. You must use POST if you want to upload files or have the server process submitted data.

Ten, how to refuse certain User-Agents?

You can easily block User-Agents such as scanners, bots and spammers that abuse your server.

## Block download agents ##

if ($http_user_agent ~* LWP::Simple|BBBike|wget) {

return 403;

}

The proper way to block the Soso and Yodao robots:

## Block some robots ##

if ($http_user_agent ~* Sosospider|YodaoBot) {

return 403;

}

XI, prevent image hotlinking

Image or HTML hotlinking means someone uses your site's URLs to display your images on their own site, and you end up paying the extra bandwidth cost. This happens frequently on forums and blogs. I strongly recommend that you block and prevent hotlinking.

# Stop deep linking or hot linking

location /images/ {

valid_referers none blocked www.example.com example.com;

if ($invalid_referer) {

return 403;

}

}

For example, rewrite hotlinked image requests to a specified "banned" image:

valid_referers blocked www.example.com example.com;

if ($invalid_referer) {

rewrite ^/images/uploads.*\.(gif|jpg|jpeg|png)$ http://www.examples.com/banned.jpg last;

}

Twelve, directory restrictions

You can set access permissions on specific directories. Every website directory should be configured to allow only the access it actually needs.
Restrict access by IP address
You can restrict access to a directory (here /docs/) by IP address:

location /docs/ {

## block one workstation

deny 192.168.1.1;

## allow anyone in 192.168.1.0/24

allow 192.168.1.0/24;

## drop rest of the world

deny all;

}

To protect a directory with a password, first create the password file and add the "user" account:

mkdir /usr/local/nginx/conf/.htpasswd/

htpasswd -c /usr/local/nginx/conf/.htpasswd/passwd user

Edit nginx.conf, added need protected directories

### Password Protect /personal-images/ and /delta/ directories ###

location ~ /(personal-images/.*|delta/.*) {

auth_basic “Restricted”;

auth_basic_user_file /usr/local/nginx/conf/.htpasswd/passwd;

}

Once the password file has been generated, you can use the following command to add additional users who are allowed access:

htpasswd -s /usr/local/nginx/conf/.htpasswd/passwd userName

Thirteen, Nginx SSL Configuration

HTTP is a plain text protocol, which is open to passive surveillance. You should use SSL to encrypt your user content.
Create an SSL certificate by executing the following commands:

cd /usr/local/nginx/conf

openssl genrsa -des3 -out server.key 1024

openssl req -new -key server.key -out server.csr

cp server.key server.key.org

openssl rsa -in server.key.org -out server.key

openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt

Edit nginx.conf and apply the following updates:

server {

server_name example.com;

listen 443;

ssl on;

ssl_certificate /usr/local/nginx/conf/server.crt;

ssl_certificate_key /usr/local/nginx/conf/server.key;

access_log /usr/local/nginx/logs/ssl.access.log;

error_log /usr/local/nginx/logs/ssl.error.log;

}

Fourteen, Nginx and PHP Security Recommendations

PHP is a popular scripting language on the server side. Edit /etc/php.ini file as follows:

# Disallow dangerous functions

disable_functions = phpinfo, system, mail, exec

## Try to limit resources ##

# Maximum execution time of each script, in seconds

max_execution_time = 30

# Maximum amount of time each script may spend parsing request data

max_input_time = 60

# Maximum amount of memory a script may consume (8MB)

memory_limit = 8M

# Maximum size of POST data that PHP will accept.

post_max_size = 8M

# Whether to allow HTTP file uploads.

file_uploads = Off

# Maximum allowed size for uploaded files.

upload_max_filesize = 2M

# Do not expose PHP error messages to external users

display_errors = Off

# Turn on safe mode

safe_mode = On

# Only allow access to executables in isolated directory

safe_mode_exec_dir = php-required-executables-path

# Limit external access to PHP environment

safe_mode_allowed_env_vars = PHP_

# Restrict PHP information leakage

expose_php = Off

# Log all errors

log_errors = On

# Do not register globals for input data

register_globals = Off

# Minimize allowable PHP post size

post_max_size = 1K

# Ensure PHP redirects appropriately

cgi.force_redirect = 0

# Disallow uploading unless necessary

# Enable SQL safe mode

sql.safe_mode = On

# Avoid Opening remote files

allow_url_fopen = Off

Fifteen, if possible, run Nginx in a chroot jail

Placing nginx in a chroot jail limits the damage of a break-in, because the process cannot reach directories outside the jail. You can chroot a traditional nginx installation, or, where possible, use containment such as FreeBSD jails or Xen / OpenVZ virtualization containers.

XVI, limit the number of connections per IP at the firewall level

A web server must monitor connections and limit the number of new connections per second. PF and Iptables can block access before requests even reach your nginx server.
Linux Iptables: limit the number of connections to Nginx per IP.
The following example blocks a single IP that opens more than 15 connections to port 80 within 60 seconds.

/sbin/iptables -A INPUT -p tcp --dport 80 -i eth0 -m state --state NEW -m recent --set

/sbin/iptables -A INPUT -p tcp --dport 80 -i eth0 -m state --state NEW -m recent --update --seconds 60 --hitcount 15 -j DROP

service iptables save

Set the connection limit according to your specific situation.

XVII, configure the operating system to protect the web server

Enable SELinux as described above, and set correct permissions on the /nginx document root directory.
Nginx runs as the nginx user, but the document root (/nginx or /usr/local/nginx/html/) should not be owned by, or writable to, the nginx user.
You can find files with the wrong ownership using the following commands:

find /nginx -user nginx

find /usr/local/nginx/html -user nginx

Make sure ownership is root or another non-nginx user; a typical permission layout for /usr/local/nginx/html/ looks like this:

ls -l /usr/local/nginx/html/

Sample output:

-rw-r--r-- 1 root root 925 Jan 3 00:50 error4xx.html

-rw-r--r-- 1 root root 52 Jan 3 10:00 error5xx.html

-rw-r--r-- 1 root root 134 Jan 3 00:52 index.html

You should also delete any backup files created by vi or other text editors:

find /nginx -name '.?*' -not -name .ht* -or -name '*~' -or -name '*.bak*' -or -name '*.old*'

find /usr/local/nginx/html/ -name '.?*' -not -name .ht* -or -name '*~' -or -name '*.bak*' -or -name '*.old*'

To delete these files, pass the -delete option to the find command, as sketched below.
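A hedged sketch of that cleanup; the paths and patterns are taken from the commands above, so review the match list first and adjust before deleting anything:

# preview the editor backup files that would be removed
find /usr/local/nginx/html/ -type f \( -name '*~' -o -name '*.bak*' -o -name '*.old*' \) -print

# remove them once the list looks right
find /usr/local/nginx/html/ -type f \( -name '*~' -o -name '*.bak*' -o -name '*.old*' \) -delete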

Eighteen, limit outgoing connections from Nginx

Attackers may use tools such as wget to pull files from your server out to the Internet. Use Iptables to block outgoing connections made by the nginx user. The ipt_owner module matches the creator of locally generated packets. The following example allows only the specified user (here vivek) to make outbound connections to port 80.

/sbin/iptables -A OUTPUT -o eth0 -m owner --uid-owner vivek -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT

With the above configuration your nginx server is reasonably well hardened and you can publish web pages. However, you should also look up additional security settings for the applications your site runs, for example WordPress or other third-party programs.

nginx is a good web server and provides a full range of rate-limiting functionality. The main modules are ngx_http_core_module,
ngx_http_limit_conn_module and ngx_http_limit_req_module: the first provides limit_rate (bandwidth limiting),

and the other two, as their names say, limit connections (limit_conn) and limit requests (limit_req). All of these modules are compiled into nginx by default.

All of the limits are applied per IP, so they offer some defence against CC and DDoS attacks.

Bandwidth limiting is easy to understand, so straight to an example:

location /mp3 {
limit_rate 200k;
}

There is a way to make the speed limit friendlier: only start limiting after a certain amount of data has been transferred.

For example, send the first 1M at full speed and only then apply the limit:

location /photo {
limit_rate_after 1m;
limit_rate 100k;
}

Next, limiting connections and limiting requests.

Why are there two modules? A page usually contains more than one sub-resource, for example five images; requesting the page may open a single connection
that then fetches all five images, so one connection can carry multiple requests. To preserve the user experience,
choose whether to limit connections or requests according to your actual needs.

1, limit the number of connections

To limit connections you first need a shared memory zone to hold the per-IP connection counters; add the following to the http block:

limit_conn_zone $binary_remote_addr zone=addr:5m;

This creates a 5M shared-memory zone named addr (each connection state takes 32 or 64 bytes, so 5m can hold tens of thousands of connections,
which is usually enough; if the zone is exhausted, nginx returns 503).

Next, apply the limit in the relevant server or location block, for example restricting each IP to 2 concurrent connections:

limit_conn addr 2;

2, limit the number of requests

To limit the number of requests you likewise need a shared zone first; unlike limit_conn_zone, the limit_req_zone definition also carries the allowed rate, and the limit can be applied globally or only for particular locations.

For a global limit of, say, 20 requests per second per IP, add this to the http block:

limit_req_zone $binary_remote_addr zone=one:5m rate=20r/s;

Sometimes you want to tune this for individual location blocks; the burst parameter allows short bursts above the rate to be queued:

limit_req zone=one burst=50;

If you do not want queued requests to be delayed, add the nodelay parameter:

limit_req zone=one burst=50 nodelay;

That is a short introduction to rate limiting in nginx; corrections are welcome. Think carefully about which limits to apply so that the user experience is not damaged. A combined sketch follows below.
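As a combined sketch, the pieces above could be assembled like this; the zone names, numbers and the /download/ location are only examples:

http {
    limit_conn_zone $binary_remote_addr zone=addr:5m;
    limit_req_zone  $binary_remote_addr zone=one:5m rate=20r/s;

    server {
        location /download/ {
            limit_conn addr 2;                 # at most 2 concurrent connections per IP
            limit_req  zone=one burst=50;      # 20 req/s per IP, bursts up to 50 queued
            limit_rate_after 1m;               # first 1 MB at full speed
            limit_rate 100k;                   # then 100 KB/s per connection
        }
    }
}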

nginx: filter web crawlers out of the logs

When analysing Nginx logs, the many entries left by spiders and crawlers are a headache.

Since most crawlers identify themselves as xx-bot or xx-spider, the following configuration writes crawler traffic to a separate log:

location / {
if ($http_user_agent ~* "bot|spider") {
access_log /var/log/nginx/spider.access.log;
}
}
Or simply do not log them at all:

location / {
if ($http_user_agent ~* "bot|spider") {
access_log off;
}
}

Tomcat: multiple instances with systemd on CentOS 7 / RHEL 7

rpm -ivh jdk-8u60-linux-x64.rpm

getent group tomcat || groupadd -r tomcat
getent passwd tomcat || useradd -r -d /opt -s /bin/nologin tomcat

cd /opt
wget http://mirror.nus.edu.sg/apache/tomcat/tomcat-8/v8.0.30/bin/apache-tomcat-8.0.30.tar.gz
tar xzf apache-tomcat-8.0.30.tar.gz

mv apache-tomcat-8.0.30 tomcat01
chown -R tomcat:tomcat tomcat01

tar zxvf apache-tomcat-8.0.30.tar.gz
mv apache-tomcat-8.0.30 tomcat02
chown -R tomcat:tomcat tomcat02

sed -i 's/8080/8081/g' /opt/tomcat01/conf/server.xml
sed -i 's/8005/8001/g' /opt/tomcat01/conf/server.xml
sed -i 's/8080/8082/g' /opt/tomcat02/conf/server.xml
sed -i 's/8005/8002/g' /opt/tomcat02/conf/server.xml

sed -i '/8009/d' /opt/tomcat01/conf/server.xml
sed -i '/8009/d' /opt/tomcat02/conf/server.xml

cd /usr/lib/systemd/system
cat >tomcat01.service <<EOF
[Unit]
Description=Apache Tomcat 8 (tomcat01)
After=network.target
[Service]
Type=oneshot
ExecStart=/opt/tomcat01/bin/startup.sh
ExecStop=/opt/tomcat01/bin/shutdown.sh
RemainAfterExit=yes
User=tomcat
Group=tomcat
[Install]
WantedBy=multi-user.target
EOF

sed 's/tomcat01/tomcat02/g' tomcat01.service > tomcat02.service

systemctl enable tomcat01
systemctl enable tomcat02
systemctl start tomcat01
systemctl start tomcat02
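To confirm that both instances came up on their own ports, a quick check (assuming the 8081/8082 connectors configured above) might be:

systemctl status tomcat01 tomcat02
ss -tlnp | grep -E '8081|8082'
curl -I http://127.0.0.1:8081/
curl -I http://127.0.0.1:8082/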

proxy_cache_path /var/cache/nginx/proxy_cache levels=1:2 keys_zone=static:10m inactive=30d max_size=1g;

upstream tomcat {
ip_hash ;
#hash $remote_addr consistent;
server 127.0.0.1:8081 max_fails=1 fail_timeout=2s ;
server 127.0.0.1:8082 max_fails=1 fail_timeout=2s ;
keepalive 16;
}

server {
listen 80;
server_name tomcat.example.com;

charset utf-8;
access_log /var/log/nginx/tomcat.access.log main;
root /usr/share/nginx/html;
index index.html index.htm index.jsp;

location / {
proxy_pass http://tomcat;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_cache_use_stale error timeout invalid_header updating http_500 http_502 http_503 http_504;

proxy_connect_timeout 300;
proxy_send_timeout 300;
proxy_read_timeout 300;
proxy_http_version 1.1;
proxy_set_header Connection "";

add_header X-Backend "$upstream_addr";
}

location ~* ^.+\.(js|css|ico|gif|jpg|jpeg|png)$ {
proxy_pass http://tomcat ;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_cache_use_stale error timeout invalid_header updating http_500 http_502 http_503 http_504;

proxy_connect_timeout 300;
proxy_send_timeout 300;
proxy_read_timeout 300;
proxy_http_version 1.1;
proxy_set_header Connection "";

proxy_cache static;
proxy_cache_key $host$uri$is_args$args;
proxy_cache_valid 200 302 7d;
proxy_cache_valid 404 1m;
proxy_cache_valid any 1h;
add_header X-Cache $upstream_cache_status;

#log_not_found off;
#access_log off;
expires max;
}

location ~ /\.ht {
deny all;
}

}

RHEL 7 Notes

Study notes 2 -- command-line file operations

1, create and delete files

Create a file: touch xxxx

touch -t 201512251200 xxxx creates a file and sets its timestamp (format [[CC]YY]MMDDhhmm)

rm xxx delete the file

rm -rf forced to delete files

2. Create a directory and delete the directory

mkdir -p xxx/yyy recursively create directories;

rmdir xxx delete empty directories;

rm -rf XXX forcibly remove non-empty directories;

3, copy files and directories

cp /path1/xxx /path2/xxx/

cp -p /path1/xxx /path2/yyy copy files to retain the original file attributes;

cp -Rf (or -rf) /path1/ /path2/ copies directory path1 into directory path2

cp -a is equivalent to cp -dR --preserve=all

4 Cut files

mv /path1/xx /path2/yy

5 View Files

cat xxx
more xxx
less xxx

3– redirection and piping

Redirect standard output:

cat xx.file > yy.file is equivalent to

cat xx.file 1> yy.file: redirect the output into yy.file, overwriting its existing contents;

cat /etc/passwd &>> /tmp/xx appends both stdout and stderr to /tmp/xx

cat /etc/passwd > /tmp/xx 2>&1

tail -f /var/log/messages >/tmp/xx 2>/tmp/yy

ps aux | grep tail | grep -v 'grep'

ls -al /dev/

Redirect a file's contents to the standard input of a command, for example with tr

cat > /tmp/xxx <<EOF
>test1
>test2
>test3
>EOF

cat <<EOF> /tmp/xxx
>test2
>test3
>test4
>EOF

grep options: -n shows the line number of each match; -i ignores case; -A 3 shows 3 lines after each match; -B 3 shows 3 lines before each match; -v inverts the match (excludes the keyword); -q suppresses output;

grep -n -B1 -A1 root /etc/passwd

ifconfig | grep 'inet'|grep -v 'inet6'| awk 'BEGIN{print "IP\t\tnetmask"}{print $2,"\t",$4}END{}'

ifconfig | grep 'inet'|grep -v 'inet6'| tee -a /tmp/yy|awk 'BEGIN{print "IP\t\tnetmask"}{print $2,"\t",$4}END{}'

4 – Vim editor use

1, gedit graphical editing files

2, Vim opens a file if it exists and creates it if it does not:

When Vim opens a file, it starts in command mode:

4, to edit the file you must switch from command mode to insert mode; press one of the following keys to enter it:

i, insert at the current cursor position;

a, insert after the current cursor position (one character to the right);

o, open a new line below the current line and insert;

I, jump to the beginning of the current line and insert;

A, jump to the end of the current line and insert;

O, open a new line above the current line and insert;

r, replace the current character;

R, replace characters continuously, moving to the next character after each one;

number + G: jump to a specific line, e.g. 10G jumps to line 10; G jumps to the last line, gg jumps to the first line;

number + yy: yank (copy) that many lines starting at the current line; paste anywhere with p;

number + dd: cut (delete) that many lines starting at the current line; paste anywhere with p;

u: undo the last operation;

ctrl + r: redo the last undone operation;

ctrl + v: enter visual block mode; move the cursor to select content, press y to copy the selection, then paste anywhere with p;

To quickly add a # comment to the beginning of several lines: enter visual block mode, move the cursor to select the lines, press I to go to the start position,

type #, then press ESC to apply it to all selected lines

#abrt:x:173:173::/etc/abrt:/sbin/nologin
#pulse:x:171:171:PulseAudio System Daemon:/var/run/pulse:/sbin/nologin
#gdm:x:42:42::/var/lib/gdm:/sbin/nologin
#gnome-initial-setup:x:993:991::/run/gnome-initial-setup/:/sbin/nologin
:split splits the window; switch between windows with ctrl+w

For a detailed interactive Vim tutorial, run vimtutor.

5, last-line mode: saving, searching, settings, replacement and other operations

Press ESC to leave insert mode for command mode, then enter : (/ and ? are generally used to search; n continues the search downward, N searches upward)

Save: :wq saves and exits, as does :x;

Force quit: :q! exits without saving the file;

Display line numbers: :set nu. To show line numbers by default, add a line "set nu" to the vimrc file in your home directory or to /etc/vimrc (create the file if it does not exist);

Jump to a specified line: simply enter the line number;

Replace: :1,$s/old/new/g replaces all matches globally;

:m,ns/old/new/g replaces matches from line m to line n; . stands for the current line, $ for the last line,
$-1 for the penultimate line; (1,$) can also be written as %, both meaning the whole file.
If the text to match contains special characters such as / or *, prefix them with the escape character \.

You can also use :s#old#new# with # as the separator, so that special characters such as / do not need to be escaped;

For searches with /, if you want to ignore case, append \c to the pattern, for example: /servername\c

study notes 5– manage users and user groups

[root@RHEL7HARDEN /]# passwd --help
Usage: passwd [OPTION...] <accountName>
  -k, --keep-tokens       keep non-expired authentication tokens
  -d, --delete            delete the password for the named account (root only)
  -l, --lock              lock the password for the named account (root only)
  -u, --unlock            unlock the password for the named account (root only)
  -e, --expire            expire the password for the named account (root only)
  -f, --force             force operation
  -x, --maximum=DAYS      maximum password lifetime (root only)
  -n, --minimum=DAYS      minimum password lifetime (root only)
  -w, --warning=DAYS      number of days warning users receives before password expiration (root only)
  -i, --inactive=DAYS     number of days after password expiration when an account becomes disabled (root only)
  -S, --status            report password status on the named account (root only)
  --stdin                 read new tokens from stdin (root only)

Help options:
  -?, --help              Show this help message
  --usage                 Display brief usage message
[root@RHEL7HARDEN /]#

[root@RHEL7HARDEN /]# chage --help
Usage: chage [options] LOGIN

Options:
  -d, --lastday LAST_DAY        set date of last password change to LAST_DAY
  -E, --expiredate EXPIRE_DATE  set account expiration date to EXPIRE_DATE
  -h, --help                    display this help message and exit
  -I, --inactive INACTIVE       set password inactive after expiration
                                to INACTIVE
  -l, --list                    show account aging information
  -m, --mindays MIN_DAYS        set minimum number of days before password
                                change to MIN_DAYS
  -M, --maxdays MAX_DAYS        set maximum number of days before password
                                change to MAX_DAYS
  -R, --root CHROOT_DIR         directory to chroot into
  -W, --warndays WARN_DAYS      set expiration warning days to WARN_DAYS

[root@RHEL7HARDEN /]# useradd --help
Usage: useradd [options] LOGIN
       useradd -D
       useradd -D [options]

Options:
  -b, --base-dir BASE_DIR       base directory for the home directory of the
                                new account
  -c, --comment COMMENT         GECOS field of the new account
  -d, --home-dir HOME_DIR       home directory of the new account
  -D, --defaults                print or change default useradd configuration
  -e, --expiredate EXPIRE_DATE  expiration date of the new account
  -f, --inactive INACTIVE       password inactivity period of the new account
  -g, --gid GROUP               name or ID of the primary group of the new
                                account
  -G, --groups GROUPS           list of supplementary groups of the new
                                account
  -h, --help                    display this help message and exit
  -k, --skel SKEL_DIR           use this alternative skeleton directory
  -K, --key KEY=VALUE           override /etc/login.defs defaults
  -l, --no-log-init             do not add the user to the lastlog and
                                faillog databases
  -m, --create-home             create the user's home directory
  -M, --no-create-home          do not create the user's home directory
  -N, --no-user-group           do not create a group with the same name as
                                the user
  -o, --non-unique              allow to create users with duplicate
                                (non-unique) UID
  -p, --password PASSWORD       encrypted password of the new account
  -r, --system                  create a system account
  -R, --root CHROOT_DIR         directory to chroot into
  -s, --shell SHELL             login shell of the new account
  -u, --uid UID                 user ID of the new account
  -U, --user-group              create a group with the same name as the user
  -Z, --selinux-user SEUSER     use a specific SEUSER for the SELinux user mapping

[root@RHEL7HARDEN /]# usermod --help
Usage: usermod [options] LOGIN

Options:
  -c, --comment COMMENT         new value of the GECOS field
  -d, --home HOME_DIR           new home directory for the user account
  -e, --expiredate EXPIRE_DATE  set account expiration date to EXPIRE_DATE
  -f, --inactive INACTIVE       set password inactive after expiration
                                to INACTIVE
  -g, --gid GROUP               force use GROUP as new primary group
  -G, --groups GROUPS           new list of supplementary GROUPS
  -a, --append                  append the user to the supplemental GROUPS
                                mentioned by the -G option without removing
                                him/her from other groups
  -h, --help                    display this help message and exit
  -l, --login NEW_LOGIN         new value of the login name
  -L, --lock                    lock the user account
  -m, --move-home               move contents of the home directory to the
                                new location (use only with -d)
  -o, --non-unique              allow using duplicate (non-unique) UID
  -p, --password PASSWORD       use encrypted password for the new password
  -R, --root CHROOT_DIR         directory to chroot into
  -s, --shell SHELL             new login shell for the user account
  -u, --uid UID                 new UID for the user account
  -U, --unlock                  unlock the user account
  -Z, --selinux-user SEUSER     new SELinux user mapping for the user account

[root@RHEL7HARDEN /]# useradd test1
[root@RHEL7HARDEN /]# mkdir /home/test
[root@RHEL7HARDEN /]# usermod -d /home/test1 test
usermod: user 'test' does not exist
[root@RHEL7HARDEN /]# cp -a /etc/skel/.[^.]* /home/test/
[root@RHEL7HARDEN /]# groups test1
test1 : test1
usermod -a -G mohan test1
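Membership can then be verified with id or groups (assuming the mohan group already exists):

id test1
groups test1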

usermod -g GROUP sets the user's primary group

[root@RHEL7HARDEN /]# gpasswd --help
Usage: gpasswd [option] GROUP

Options:
  -a, --add USER                add USER to GROUP
  -d, --delete USER             remove USER from GROUP
  -h, --help                    display this help message and exit
  -Q, --root CHROOT_DIR         directory to chroot into
  -r, --delete-password         remove the GROUP's password
  -R, --restrict                restrict access to GROUP to its members
  -M, --members USER,...        set the list of members of GROUP
  -A, --administrators ADMIN,...
                                set the list of administrators for GROUP
Except for the -A and -M options, the options cannot be combined.