Monday, 22 April 2019

How to solve the "caching_sha2_password" problem



If you log in to MySQL and see an error like the one below:

[root@localhost ~]# mysql -h 127.0.0.1 -P 3306 -u root -p
Enter password:
ERROR 2059 (HY000): Authentication plugin 'caching_sha2_password' cannot be loaded: /usr/lib64/mysql/plugin/caching_sha2_password.so: cannot open shared object file: No such file or directory

So I found the reason for that error message (at least in my case). It's because MySQL, as of version 8.0.4 and onwards, uses caching_sha2_password as the default authentication plugin, whereas previously mysql_native_password was used.

This obviously causes compatibility issues with older services that expect mysql_native_password authentication.

Solutions:
  1. Check for an updated version of the client service you are using (the most recent MySQL Workbench, for instance).

  2. Downgrade the MySQL server to a version prior to that change.

  3. Change the authentication plugin on a per-user basis (I didn't find a global option at first, though one may exist).

Regarding option 3, this is as simple as altering the user:

mysql> ALTER USER root IDENTIFIED WITH mysql_native_password BY 'password';

or when creating the user:

mysql> CREATE USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY 'password';
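
Regarding the global option: MySQL 8.0 does in fact provide a server-wide setting that makes mysql_native_password the default for newly created accounts. Add this to my.cnf (in the [mysqld] section) and restart the server:

[mysqld]
default_authentication_plugin=mysql_native_password

You can then check which plugin each account uses with SELECT user, host, plugin FROM mysql.user;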



Tuesday, 09 April 2019

How to FIX ORA-00257: archiver error. Connect internal only, until freed.




Cause: The archiver process received an error while trying to archive a redo log. If the problem is not resolved soon, the database will stop executing transactions. The most likely cause of this message is the destination device is out of space to store the redo log file.

ORA-00257 is a common error in Oracle. You will usually see it upon connecting to the database because you have hit the space limit of the flash recovery area (FRA), defined by db_recovery_file_dest_size.

First, make sure automatic archiving is enabled. To check the archive log configuration, try:

SQL> archive log list;
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     2671
Next log sequence to archive   2676
Current log sequence           2676

Note that if you are using a destination of USE_DB_RECOVERY_FILE_DEST, you can find the actual archive destination with:

SQL> show parameter db_recovery_file_dest;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_recovery_file_dest                string      /oracle/fast_recovery_area
db_recovery_file_dest_size           big integer 32G

The next step in resolving ORA-00257 is to find out how much of db_recovery_file_dest_size is actually in use:

SQL> SELECT * FROM V$RECOVERY_FILE_DEST;

NAME                                     SPACE_LIMIT SPACE_USED SPACE_RECLAIMABLE NUMBER_OF_FILES
---------------------------------------- ----------- ---------- ----------------- ---------------   
/oracle/fast_recovery_area                3.8174E+10 3.8174E+10                 0             327
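
To see what is actually consuming the recovery area, you can also break the usage down by file type:

SQL> SELECT file_type, percent_space_used, percent_space_reclaimable, number_of_files
  2  FROM v$recovery_area_usage;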

You may find that SPACE_USED is the same as SPACE_LIMIT; that means the recovery area is full.

The first step to fixing this issue is to log into your server and check whether you have also run out of physical space on one of your disks or mounts. Either way, you have a few options.

You can back up your archived logs and delete the input files:

# rman target /

RMAN> BACKUP ARCHIVELOG ALL DELETE ALL INPUT;

Or you can simply delete the archived logs if you won't need them to restore your database:

# rman target /
   
RMAN> DELETE NOPROMPT ARCHIVELOG ALL;
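
If you want to keep recent logs for recovery and only purge older ones, RMAN can also delete by age; for example, to remove archived logs older than seven days:

RMAN> DELETE NOPROMPT ARCHIVELOG ALL COMPLETED BEFORE 'SYSDATE-7';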

Or you can increase db_recovery_file_dest_size:

SQL> alter system set db_recovery_file_dest_size=64g scope=both;

Finally, you can restart the database:

SQL> shutdown immediate;
SQL> startup

Setting up NSQ as a Service on CentOS 7 (systemd)


In this tutorial I will give an example of making nsqlookupd, nsqd and nsqadmin run as services on CentOS 7 (systemd).

First, create a configuration file for each of these services; all configurations are placed in the /etc/nsq directory.


The nsqlookupd configuration file is nsqlookupd.conf; make it like the file below:

# cd /etc/nsq
# vi nsqlookupd.conf

## log verbosity level: debug, info, warn, error, or fatal
log-level = "info"

## <addr>:<port> to listen on for TCP clients
tcp_address = "0.0.0.0:4160"

## <addr>:<port> to listen on for HTTP clients
http_address = "0.0.0.0:4161"

## address that will be registered with lookupd (defaults to the OS hostname)
broadcast_address = "main"


## duration of time a producer will remain in the active list since its last ping
inactive_producer_timeout = "300s"

## duration of time a producer will remain tombstoned if registration remains
tombstone_lifetime = "45s"


The nsqd configuration file is nsqd.conf; make it like the file below:

# vi nsqd.conf

## log verbosity level: debug, info, warn, error, or fatal
#log-level = "info"

## unique identifier (int) for this worker (will default to a hash of hostname)
# id = 5150

## <addr>:<port> to listen on for TCP clients
tcp_address = "0.0.0.0:4150"

## <addr>:<port> to listen on for HTTP clients
http_address = "0.0.0.0:4151"

## <addr>:<port> to listen on for HTTPS clients
# https_address = "0.0.0.0:4152"

## address that will be registered with lookupd (defaults to the OS hostname)
broadcast_address = "127.0.0.1"

## cluster of nsqlookupd TCP addresses
nsqlookupd_tcp_addresses = [
    "127.0.0.1:4160"
]

## duration to wait before HTTP client connection timeout
http_client_connect_timeout = "2s"

## duration to wait before HTTP client request timeout
http_client_request_timeout = "5s"

## path to store disk-backed messages
data_path = "/var/lib/nsq"

## number of messages to keep in memory (per topic/channel)
mem_queue_size = 10000

## number of bytes per diskqueue file before rolling
max_bytes_per_file = 104857600

## number of messages per diskqueue fsync
sync_every = 2500

## duration of time per diskqueue fsync (time.Duration)
sync_timeout = "2s"

## duration to wait before auto-requeing a message
msg_timeout = "60s"

## maximum duration before a message will timeout
max_msg_timeout = "15m"

## maximum size of a single message in bytes
max_msg_size = 1024768

## maximum requeuing timeout for a message
max_req_timeout = "1h"

## maximum size of a single command body
max_body_size = 5123840

## maximum client configurable duration of time between client heartbeats
max_heartbeat_interval = "60s"

## maximum RDY count for a client
max_rdy_count = 2500

## maximum client configurable size (in bytes) for a client output buffer
max_output_buffer_size = 65536

## maximum client configurable duration of time between flushing to a client (time.Duration)
max_output_buffer_timeout = "1s"

## UDP <addr>:<port> of a statsd daemon for pushing stats
# statsd_address = "127.0.0.1:8125"

## prefix used for keys sent to statsd (%s for host replacement)
statsd_prefix = "nsq.%s"

## duration between pushing to statsd (time.Duration)
statsd_interval = "60s"

## toggle sending memory and GC stats to statsd
statsd_mem_stats = true

## the size in bytes of statsd UDP packets
# statsd_udp_packet_size = 508

## message processing time percentiles to keep track of (float)
e2e_processing_latency_percentiles = [
    1.0,
    0.99,
    0.95
]

## calculate end to end latency quantiles for this duration of time (time.Duration)
e2e_processing_latency_window_time = "10m"

## path to certificate file
tls_cert = ""

## path to private key file
tls_key = ""

## set policy on client certificate (require - client must provide certificate,
##  require-verify - client must provide verifiable signed certificate)
# tls_client_auth_policy = "require-verify"

## set custom root Certificate Authority
# tls_root_ca_file = ""

## require client TLS upgrades
tls_required = false

## minimum TLS version ("ssl3.0", "tls1.0", "tls1.1", "tls1.2")
tls_min_version = ""

## enable deflate feature negotiation (client compression)
deflate = true

## max deflate compression level a client can negotiate (> values == > nsqd CPU usage)
max_deflate_level = 6

## enable snappy feature negotiation (client compression)
snappy = true


The nsqadmin configuration file is nsqadmin.conf; make it like the file below:

# vi  nsqadmin.conf

## log verbosity level: debug, info, warn, error, or fatal
log-level = "info"

## <addr>:<port> to listen on for HTTP clients
http_address = "0.0.0.0:4171"

## graphite HTTP address
graphite_url = ""

## proxy HTTP requests to graphite
proxy_graphite = false

## prefix used for keys sent to statsd (%s for host replacement, must match nsqd)
statsd_prefix = "nsq.%s"

## format of statsd counter stats
statsd_counter_format = "stats.counters.%s.count"

## format of statsd gauge stats
statsd_gauge_format = "stats.gauges.%s"

## time interval nsqd is configured to push to statsd (must match nsqd)
statsd_interval = "60s"

## HTTP endpoint (fully qualified) to which POST notifications of admin actions will be sent
notification_http_endpoint = ""

## nsqlookupd HTTP addresses
nsqlookupd_http_addresses = [
    "127.0.0.1:4161"
]

## nsqd HTTP addresses (optional)
#nsqd_http_addresses = [
#    "127.0.0.1:4151"
#]


Now let's create a systemd unit for each NSQ service.

  • nsqlookupd

    # cd /etc/systemd/system
    # vim nsqlookupd.service

    [Unit]
    Description=nsqlookup daemon Service
    After=network.target

    [Service]
    PrivateTmp=yes
    ExecStart=/usr/local/nsq/bin/nsqlookupd -config /etc/nsq/nsqlookupd.conf
    Restart=always

    [Install]
    WantedBy=multi-user.target

  • nsqd

    # vim nsqd.service

    [Unit]
    Description=Realtime distributed messaging (nsqd)
    After=network.target

    [Service]
    PrivateTmp=yes
    ExecStart=/usr/local/nsq/bin/nsqd -config /etc/nsq/nsqd.conf
    Restart=always

    [Install]
    WantedBy=multi-user.target

  • nsqadmin

    # vim nsqadmin.service

    [Unit]
    Description=nsqadmin daemon Service (nsqadmin)
    After=network.target

    [Service]
    PrivateTmp=yes
    ExecStart=/usr/local/nsq/bin/nsqadmin -config /etc/nsq/nsqadmin.conf
    Restart=always

    [Install]
    WantedBy=multi-user.target
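
Before enabling anything, reload systemd so it picks up the new unit files, and create the data_path directory that nsqd.conf points at:

# mkdir -p /var/lib/nsq
# systemctl daemon-reload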

Enable all NSQ services:

# systemctl enable nsqlookupd.service
Created symlink from /etc/systemd/system/multi-user.target.wants/nsqlookupd.service to /etc/systemd/system/nsqlookupd.service.

# systemctl enable nsqd.service
Created symlink from /etc/systemd/system/multi-user.target.wants/nsqd.service to /etc/systemd/system/nsqd.service.

# systemctl enable nsqadmin.service
Created symlink from /etc/systemd/system/multi-user.target.wants/nsqadmin.service to /etc/systemd/system/nsqadmin.service.

Now start the services:

# systemctl start nsqlookupd
# systemctl start nsqd
# systemctl start nsqadmin

To check whether a service is running:

# systemctl status nsqlookupd
● nsqlookupd.service - nsqlookup daemon Service
   Loaded: loaded (/etc/systemd/system/nsqlookupd.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2019-04-09 03:38:40 EDT; 1h 41min ago
 Main PID: 5193 (nsqlookupd)
   CGroup: /system.slice/nsqlookupd.service
           └─5193 /usr/local/nsq/bin/nsqlookupd -config /etc/nsq/nsqlookupd.c...

Apr 09 05:18:06 main.coba.net nsqlookupd[5193]: [nsqlookupd] 2019/04/09 05:1...)
Hint: Some lines were ellipsized, use -l to show in full.

# systemctl status nsqd
● nsqd.service - Realtime distributed messaging (nsqd)
   Loaded: loaded (/etc/systemd/system/nsqd.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2019-04-09 04:58:21 EDT; 22min ago
 Main PID: 5234 (nsqd)
   CGroup: /system.slice/nsqd.service
           └─5234 /usr/local/nsq/bin/nsqd -config /etc/nsq/nsqd.conf

Apr 09 04:58:21 main.coba.net nsqd[5234]: [nsqd] 2019/04/09 04:58:21.319327 ...0
Hint: Some lines were ellipsized, use -l to show in full.

# systemctl status nsqadmin
● nsqadmin.service - nsqadmin daemon Service (nsqadmin)
   Loaded: loaded (/etc/systemd/system/nsqadmin.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2019-04-09 04:59:13 EDT; 21h ago
 Main PID: 5249 (nsqadmin)
   CGroup: /system.slice/nsqadmin.service
           └─5249 /usr/local/nsq/bin/nsqadmin -config /etc/nsq/nsqadmin.conf

Apr 09 04:59:13 main.coba.net systemd[1]: Started nsqadmin daemon Service (n....
Apr 09 04:59:13 main.coba.net nsqadmin[5249]: [nsqadmin] 2019/04/09 04:59:13...)
Hint: Some lines were ellipsized, use -l to show in full


That's all.

Monday, 08 April 2019

NSQ messaging Installation


Installation

This is pretty quick for a local setup. Download the tarball and untar it; after untarring, ensure that the extracted bin folder contains all the binaries whose names start with nsq.

$ wget https://s3.amazonaws.com/bitly-downloads/nsq/nsq-0.2.31.linux-amd64.go1.3.1.tar.gz
$ tar xvfz nsq-0.2.31.linux-amd64.go1.3.1.tar.gz
$ sudo mkdir -p /usr/local/nsq/bin
$ sudo mv nsq-0.2.31.linux-amd64.go1.3.1/bin/* /usr/local/nsq/bin

Setup NSQ and PATH environment variables with root privileges:

# echo 'export NSQROOT=/usr/local/nsq' | tee -a /etc/profile
# echo 'export PATH=$PATH:/usr/local/nsq/bin' | tee -a /etc/profile

Start the following daemons:

$ source /etc/profile
$ nsqlookupd &
$ nsqd --lookupd-tcp-address=127.0.0.1:4160 &
$ nsqadmin --lookupd-http-address=127.0.0.1:4161 &
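
To confirm the daemons are up, nsqlookupd and nsqd each expose a /ping HTTP endpoint that should answer OK:

$ curl http://127.0.0.1:4161/ping
$ curl http://127.0.0.1:4151/ping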

If done successfully, you will be able to view the nsqadmin web UI in your browser (port 4171).




nsqadmin is a Web UI to view aggregated cluster stats in realtime and perform various administrative tasks.

Go is required for the next step; if Go is not on your system, you can install it with the tutorial at the following link.


Writing Your Go Program

I like creating the consumer first so I can see the handler in action after pushing a message with a producer (see next section).

Create a src directory in your home directory, and inside it create consumer and producer directories:

$ mkdir src
$ cd src
$ mkdir consumer producer

Get the Go client library (the project moved from github.com/bitly/go-nsq to github.com/nsqio/go-nsq; the code below imports the latter):

$ go get -u -v github.com/nsqio/go-nsq

Creating a consumer

Create a file like the one below at ~/src/consumer/consumer01.go:

package main

import (
    "log"
    "sync"

    "github.com/nsqio/go-nsq"
)

func main() {
    wg := &sync.WaitGroup{}
    wg.Add(1)

    decodeConfig := nsq.NewConfig()
    c, err := nsq.NewConsumer("NSQ_Topic", "NSQ_Channel", decodeConfig)
    if err != nil {
        log.Panic("Could not create consumer")
    }
    //c.MaxInFlight defaults to 1

    c.AddHandler(nsq.HandlerFunc(func(message *nsq.Message) error {
        log.Println("NSQ message received:")
        log.Println(string(message.Body))
        return nil
    }))

    err = c.ConnectToNSQD("127.0.0.1:4150")
    if err != nil {
        log.Panic("Could not connect")
    }
    log.Println("Awaiting messages from NSQ topic \"NSQ Topic\"...")
    wg.Wait()
}

Now run this consumer program:

$ go run consumer01.go

You’ll get this output:


2019/04/08 03:41:33 INF    1 [NSQ_Topic/NSQ_Channel] (127.0.0.1:4150) connecting to nsqd
2019/04/08 03:41:33 Awaiting messages from NSQ topic "NSQ Topic"...

This should hang there, waiting to receive an NSQ message on the topic you specified. Nothing will happen just yet, since there aren't any queued-up messages for this particular topic.
Leave this program running in a terminal window for now. In the next step we'll push a message to it.

Creating a Producer

You can publish a message with a producer using some simple code like this. Create a file like the one below at ~/src/producer/producer01.go:

package main

import (
  "log"

  "github.com/nsqio/go-nsq"
)

func main() {
    config := nsq.NewConfig()
    p, err := nsq.NewProducer("127.0.0.1:4150", config)
    if err != nil {
        log.Panic(err)
    }
    err = p.Publish("NSQ_Topic", []byte("sample NSQ message"))
    if err != nil {
        log.Panic(err)
    }
}
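
As a quick alternative to the Go producer, you can also publish straight to nsqd's HTTP API; the /pub endpoint takes the topic as a query parameter and the message as the POST body:

$ curl -d 'sample NSQ message' 'http://127.0.0.1:4151/pub?topic=NSQ_Topic'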

Now run this publisher program:

$ go run producer01.go

In this terminal window, you’ll only see this message indicating your message was published to NSQ:

go run producer01.go
2019/04/08 03:43:35 INF    1 (127.0.0.1:4150) connecting to nsqd

If you look at your consumer terminal window that you left running from the previous step, you’ll now see this additional output:

2019/04/08 03:41:33 Awaiting messages from NSQ topic "NSQ Topic"...
2019/04/08 03:43:35 NSQ message received:
2019/04/08 03:43:35 sample NSQ message

Don't forget to add firewall rules:

# firewall-cmd --zone=public --add-port=4171/tcp
# firewall-cmd --zone=public --add-port=4171/tcp --permanent
# firewall-cmd --zone=public --add-port=4150/tcp
# firewall-cmd --zone=public --add-port=4150/tcp --permanent
# firewall-cmd --zone=public --add-port=4151/tcp
# firewall-cmd --zone=public --add-port=4151/tcp --permanent
# firewall-cmd --zone=public --add-port=4160/tcp
# firewall-cmd --zone=public --add-port=4160/tcp --permanent
# firewall-cmd --zone=public --add-port=4161/tcp
# firewall-cmd --zone=public --add-port=4161/tcp --permanent

Congrats - you just pushed and received your first NSQ message!

If you go back to your web UI console you'll see your newly created topic. If you drill into this topic, you can also see the channel on which you consumed the message, with the message counter at 1.


Sunday, 07 April 2019

Check CPU in Nagios Using SNMP on CentOS


SNMP stands for Simple Network Management Protocol. It is a way that servers can share information about their current state, and also a channel through which an administrator can modify pre-defined values. While the protocol itself is very simple, the structure of programs that implement SNMP can be very complex.

If you haven't installed Nagios Core yet, see the following article: How to Install and Configure Nagios on CentOS.

In this article we will show you how to install and configure SNMP in the remote server and how to add the host to Nagios Core.

Scenario

In this tutorial I am going to use two systems, as mentioned below.

Nagios server (host):

Operating system : CentOS 6 minimal server
IP Address       : 192.168.1.150/24

Nagios client (remote):

Operating System : CentOS 6 minimal server
IP Address       : 192.168.1.152/24



Installing SNMP Agent on remote machine


  1. Install SNMP and SNMP utilities:

    yum -y install net-snmp net-snmp-utils

  2. Add a basic configuration:

    mv /etc/snmp/snmpd.conf /etc/snmp/snmpd.conf.orig
    cp /dev/null /etc/snmp/snmpd.conf
    vim /etc/snmp/snmpd.conf

    Insert the following text into the new /etc/snmp/snmpd.conf

    # Map 'idv90we3rnov90wer' community to the 'ConfigUser'
    # Map '209ijvfwer0df92jd' community to the 'AllUser'
    #       sec.name        source          community
    com2sec ConfigUser      default         idv90we3rnov90wer
    com2sec AllUser         default         209ijvfwer0df92jd
    # Map 'ConfigUser' to 'ConfigGroup' for SNMP Version 2c
    # Map 'AllUser' to 'AllGroup' for SNMP Version 2c
    #                       sec.model       sec.name
    group   ConfigGroup     v2c             ConfigUser
    group   AllGroup        v2c             AllUser
    # Define 'SystemView', which includes everything under .1.3.6.1.2.1.1 (or .1.3.6.1.2.1.25.1)
    # Define 'AllView', which includes everything under .1
    #                       incl/excl       subtree
    view    SystemView      included        .1.3.6.1.2.1.1
    view    SystemView      included        .1.3.6.1.2.1.25.1.1
    view    AllView         included        .1
    # Give 'ConfigGroup' read access to objects in the view 'SystemView'
    # Give 'AllGroup' read access to objects in the view 'AllView'
    #                       context model   level   prefix  read            write   notify
    access  ConfigGroup     ""      any     noauth  exact   SystemView      none    none
    access  AllGroup        ""      any     noauth  exact   AllView         none    none

    Exit vim, and restart the SNMP service to reload the new configuration file:

    on Centos 6.x

    service snmpd restart

    on Centos 7.x

    systemctl restart snmpd

    Configure SNMP to start when the server boots:

    on Centos 6.x

    chkconfig snmpd on

    on Centos 7.x

    systemctl enable snmpd

  3. Test the SNMP configuration. From localhost on the remote machine:

    snmpwalk -v 2c -c 209ijvfwer0df92jd -O e 127.0.0.1

    from nagios host

    snmpwalk -v 2c -c 209ijvfwer0df92jd -O e 192.168.1.152
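
    Note that snmpd listens on UDP port 161, so if the remote machine runs a firewall the Nagios host will not be able to query it. On CentOS 7 with firewalld, open the port like this (on CentOS 6, add an equivalent iptables rule):

    # firewall-cmd --zone=public --add-port=161/udp
    # firewall-cmd --zone=public --add-port=161/udp --permanent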


Configure Nagios To Monitor The Linux Host

  1. Install the check_cpu SNMP plugin, downloaded from:
     https://exchange.nagios.org/directory/Plugins/Operating-Systems/Solaris/check_cpu/details

    Put check_cpu into your Nagios libexec directory
    (e.g. /usr/local/nagios/libexec, or wherever Nagios is installed on your server).

    Create the symbolic links, being careful not to overwrite any files by the same name already there

    ln -s check_cpu check_load
    ln -s check_cpu check_ram
    ln -s check_cpu check_swap

    (these symlinks dictate how the script is run, as it can check CPU, RAM, load and swap, depending on how it is invoked)

  2. Insert the following text into /usr/local/nagios/etc/objects/commands.cfg:

    #snmp check_cpu

    define command{
    command_name check_cpu
    command_line $USER1$/check_cpu -H $HOSTADDRESS$ -w $ARG1$ -c $ARG2$ -p $ARG3$
    }

    define command{
    command_name check_swap
    command_line $USER1$/check_swap -H $HOSTADDRESS$ -w $ARG1$ -c $ARG2$ -p $ARG3$
    }

    define command{
    command_name check_ram
    command_line $USER1$/check_ram -H $HOSTADDRESS$ -w $ARG1$ -c $ARG2$ -p $ARG3$ -o $ARG4$
    }

    define command{
    command_name check_load
    command_line $USER1$/check_load -H $HOSTADDRESS$ -w $ARG1$ -c $ARG2$ -p $ARG3$
    }
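
    Before wiring these into Nagios, it helps to run the plugin by hand from the Nagios server to confirm the SNMP community and thresholds work; a sketch, assuming the plugin takes the same -H/-w/-c/-p flags used in the command definitions above:

    # /usr/local/nagios/libexec/check_cpu -H 192.168.1.152 -w 70% -c 80% -p 209ijvfwer0df92jd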



  3. Insert the following text into /usr/local/nagios/etc/servers/hostname.cfg:

    define host{

    use                             linux-server
    host_name                       hostname
    alias                           hostname
    address                         192.168.10.70
    max_check_attempts              5
    check_period                    24x7
    notification_interval           30
    notification_period             24x7

    }


    # Check CPU with check_cpu snmpd

    define service{
            use                             local-service         ; Name of service template to use
            host_name                       hostname
            service_description             Check CPU
            check_command                   check_cpu!70%!80%!209ijvfwer0df92jd
            }

    define service{
            use                             local-service         ; Name of service template to use
            host_name                       hostname
            service_description             Check Swap Memory
            check_command                   check_swap!70!90!209ijvfwer0df92jd
            }

    define service{
            use                             local-service         ; Name of service template to use
            host_name                       hostname
            service_description             Check Memory
            check_command                   check_ram!70!90!209ijvfwer0df92jd
            }

    define service{
            use                             local-service         ; Name of service template to use
            host_name                       hostname
            service_description             Check Load Average
            check_command                   check_load!60,70,80!70,80,90!209ijvfwer0df92jd
            }

  4. Verify the Nagios configuration files for any errors, then restart Nagios.

    /usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg
    Total Warnings: 0
    Total Errors:   0

    Finally, restart Nagios.

    On Centos 6.x

    service nagios restart

    On Centos 7.x

    systemctl restart nagios

    Log into the web interface at http://[SERVER_IP]/nagios, enter your login information, and check for the newly added Linux host in the Nagios Core services view.
    That’s all.

    Congratulations! Enjoy your Monitoring platform Nagios Core.


Friday, 05 April 2019

how to change listener ports on Oracle

This tutorial explains how to change listener ports on Oracle.

  1. Stop the Oracle listener using the following command:

    [oracle@trialsd admin]$ lsnrctl stop

    LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 05-APR-2019 09:56:07

    Copyright (c) 1991, 2014, Oracle.  All rights reserved.

    Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.10.31)(PORT=1521)))
    The command completed successfully


  2. Change the port number in the Oracle listener.ora file. For example, from the default port 1521 to 1522.

    [oracle@trialsd admin]$ cd /oracle/product/12.1.0/db_1/network/admin
    [oracle@trialsd admin]$ more listener.ora
    # listener.ora Network Configuration File: /oracle/product/12.1.0/db_1/network/admin/listener.ora
    # Generated by Oracle configuration tools.

    LISTENER =
      (DESCRIPTION_LIST =
        (DESCRIPTION =
          (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.10.31)(PORT = 1521))
          (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
        )
      )

    Use your favorite editor (I use vi) to change the port in listener.ora so that it looks like this:

    [oracle@trialsd admin]$ vi listener.ora

    [oracle@trialsd admin]$ more listener.ora
    # listener.ora Network Configuration File: /oracle/product/12.1.0/db_1/network/admin/listener.ora
    # Generated by Oracle configuration tools.

    LISTENER =
      (DESCRIPTION_LIST =
        (DESCRIPTION =
          (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.10.31)(PORT = 1522))
          (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
        )
      )


  3. Change the port number in tnsnames.ora, just as you did in listener.ora.

    [oracle@trialsd admin]$ more tnsnames.ora
    # tnsnames.ora Network Configuration File: /oracle/product/12.1.0/db_1/network/admin/tnsnames.ora
    # Generated by Oracle configuration tools.

    ORCL =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.10.31)(PORT = 1521))
        (CONNECT_DATA =
          (SERVER = DEDICATED)
          (SERVICE_NAME = orcl)
        )
      )

    Use your favorite editor (I use vi) to change the port in tnsnames.ora so that it looks like this:

    [oracle@trialsd admin]$ vi tnsnames.ora
    [oracle@trialsd admin]$
    [oracle@trialsd admin]$ more tnsnames.ora
    # tnsnames.ora Network Configuration File: /oracle/product/12.1.0/db_1/network/admin/tnsnames.ora
    # Generated by Oracle configuration tools.

    ORCL =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.10.31)(PORT = 1522))
        (CONNECT_DATA =
          (SERVER = DEDICATED)
          (SERVICE_NAME = orcl)
        )
      )
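
    Before restarting the listener, you can verify that tnsnames.ora now resolves to the new port with tnsping:

    [oracle@trialsd admin]$ tnsping ORCL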


  4. Restart the Oracle listener using the following command:

    [oracle@trialsd admin]$ lsnrctl start

    LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 05-APR-2019 10:41:52

    Copyright (c) 1991, 2014, Oracle.  All rights reserved.

    Starting /oracle/product/12.1.0/db_1/bin/tnslsnr: please wait...

    TNSLSNR for Linux: Version 12.1.0.2.0 - Production
    System parameter file is /oracle/product/12.1.0/db_1/network/admin/listener.ora
    Log messages written to /oracle/diag/tnslsnr/trialsd/listener/alert/log.xml
    Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.10.31)(PORT=1522)))
    Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1522)))

    Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.10.31)(PORT=1522)))
    STATUS of the LISTENER
    ------------------------
    Alias                     LISTENER
    Version                   TNSLSNR for Linux: Version 12.1.0.2.0 - Production
    Start Date                05-APR-2019 10:41:52
    Uptime                    0 days 0 hr. 0 min. 0 sec
    Trace Level               off
    Security                  ON: Local OS Authentication
    SNMP                      OFF
    Listener Parameter File   /oracle/product/12.1.0/db_1/network/admin/listener.ora
    Listener Log File         /oracle/diag/tnslsnr/trialsd/listener/alert/log.xml
    Listening Endpoints Summary...
      (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.10.31)(PORT=1522)))
      (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1522)))
    The listener supports no services
    The command completed successfully


  5. Change the port to which the database is listening:

    [oracle@trialsd admin]$ sqlplus / as sysdba

    SQL*Plus: Release 12.1.0.2.0 Production on Fri Apr 5 10:45:57 2019

    Copyright (c) 1982, 2014, Oracle.  All rights reserved.


    Connected to:
    Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
    With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

    SQL> alter system set LOCAL_LISTENER="(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1522))";

    System altered.

    SQL> alter system register;

    System altered.


  6. Check the listener status using the following command:

    [oracle@trialsd admin]$ lsnrctl status

    LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 05-APR-2019 11:08:47

    Copyright (c) 1991, 2014, Oracle.  All rights reserved.

    Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.10.31)(PORT=1522)))
    STATUS of the LISTENER
    ------------------------
    Alias                     LISTENER
    Version                   TNSLSNR for Linux: Version 12.1.0.2.0 - Production
    Start Date                05-APR-2019 11:04:46
    Uptime                    0 days 0 hr. 4 min. 0 sec
    Trace Level               off
    Security                  ON: Local OS Authentication
    SNMP                      OFF
    Listener Parameter File   /oracle/product/12.1.0/db_1/network/admin/listener.ora
    Listener Log File         /oracle/diag/tnslsnr/trialsd/listener/alert/log.xml
    Listening Endpoints Summary...
      (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.10.31)(PORT=1522)))
      (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1522)))
      (DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=192.168.10.31)(PORT=5500))(Security=(my_wallet_directory=/oracle/admin/orcl/xdb_wallet))(Presentation=HTTP)(Session=RAW))
    Services Summary...
    Service "orcl" has 1 instance(s).
      Instance "orcl", status READY, has 1 handler(s) for this service...
    Service "orcl" has 1 instance(s).
      Instance "orcl", status READY, has 1 handler(s) for this service...
    Service "orclXDB" has 1 instance(s).
      Instance "orcl", status READY, has 1 handler(s) for this service...
    The command completed successfully