
Thread: Server Unreachable

  1. #1

    Server Unreachable

    Hi everyone. I have a VPS at OVH running CentOS 7 64-bit.

    A few days ago, after a reboot, the server became unreachable:
    when I ping it from Windows I get an error

    Pinging 193.70.2.123 with 32 bytes of data:
    Reply from 164.132.232.25: Destination host unreachable.
    Reply from 164.132.232.25: Destination host unreachable.
    Reply from 164.132.232.25: Destination host unreachable.

    Ping statistics for 193.70.2.123:
    Packets: Sent = 3, Received = 3,
    Lost = 0 (0% loss),

    I then asked the hosting provider what to do, and they told me to boot into rescue mode and read the logs, which do indeed show that two system processes were killed.
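Reading the installed system's logs from rescue mode comes down to searching the mounted disk's /var/log. A minimal sketch, assuming the disk is mounted at /mnt/sdb1 (the mount point seen in the prompts later in this thread) and that "processes were killed" means OOM-killer entries in syslog:

```shell
# The old system's disk is assumed mounted at /mnt/sdb1 (as in this thread).
LOG=/mnt/sdb1/var/log/messages
# The OOM killer logs lines like "Out of memory: Kill process NNN (name)"
grep -iE 'out of memory|killed? process' "$LOG" 2>/dev/null \
    || echo "no OOM-kill entries found (or log not at $LOG)"
```

If the kills were OOM-related, the same log usually names the process that was consuming the memory right above the kill line.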

    Here are the various system logs

    CRON:
    Code:
    Mar 12 03:09:04 vps336855 run-parts(/etc/cron.daily)[25312]: finished logrotate
    Mar 12 03:09:04 vps336855 run-parts(/etc/cron.daily)[25285]: starting makewhatis.cron
    Mar 12 03:09:09 vps336855 run-parts(/etc/cron.daily)[25458]: finished makewhatis.cron
    Mar 12 03:09:09 vps336855 anacron[23829]: Job `cron.daily' terminated
    Mar 12 03:09:09 vps336855 anacron[23829]: Normal exit (1 job run)
    Mar 12 04:01:01 vps336855 CROND[2623]: (root) CMD (run-parts /etc/cron.hourly)
    Mar 12 04:01:01 vps336855 run-parts(/etc/cron.hourly)[2623]: starting 0anacron
    Mar 12 04:01:01 vps336855 run-parts(/etc/cron.hourly)[2632]: finished 0anacron
    Mar 12 05:01:01 vps336855 CROND[13739]: (root) CMD (run-parts /etc/cron.hourly)
    Mar 12 05:01:01 vps336855 run-parts(/etc/cron.hourly)[13739]: starting 0anacron
    Mar 12 05:01:01 vps336855 run-parts(/etc/cron.hourly)[13748]: finished 0anacron
    Mar 12 06:01:01 vps336855 CROND[25117]: (root) CMD (run-parts /etc/cron.hourly)
    Mar 12 06:01:02 vps336855 run-parts(/etc/cron.hourly)[25117]: starting 0anacron
    Mar 12 06:01:02 vps336855 run-parts(/etc/cron.hourly)[25126]: finished 0anacron
    Mar 12 07:01:01 vps336855 CROND[3769]: (root) CMD (run-parts /etc/cron.hourly)
    Mar 12 07:01:01 vps336855 run-parts(/etc/cron.hourly)[3769]: starting 0anacron
    Mar 12 07:01:01 vps336855 run-parts(/etc/cron.hourly)[3778]: finished 0anacron
    Mar 12 08:01:01 vps336855 CROND[15115]: (root) CMD (run-parts /etc/cron.hourly)
    Mar 12 08:01:01 vps336855 run-parts(/etc/cron.hourly)[15115]: starting 0anacron
    Mar 12 08:01:01 vps336855 run-parts(/etc/cron.hourly)[15124]: finished 0anacron
    Mar 12 09:01:01 vps336855 CROND[26218]: (root) CMD (run-parts /etc/cron.hourly)
    Mar 12 09:01:01 vps336855 run-parts(/etc/cron.hourly)[26218]: starting 0anacron
    Mar 12 09:01:01 vps336855 run-parts(/etc/cron.hourly)[26227]: finished 0anacron
    Mar 12 10:01:01 vps336855 CROND[4884]: (root) CMD (run-parts /etc/cron.hourly)
    Mar 12 10:01:01 vps336855 run-parts(/etc/cron.hourly)[4884]: starting 0anacron
    Mar 12 10:01:01 vps336855 run-parts(/etc/cron.hourly)[4893]: finished 0anacron
    Mar 12 11:01:01 vps336855 CROND[16031]: (root) CMD (run-parts /etc/cron.hourly)
    Mar 12 11:01:01 vps336855 run-parts(/etc/cron.hourly)[16031]: starting 0anacron
    Mar 12 11:01:01 vps336855 run-parts(/etc/cron.hourly)[16040]: finished 0anacron
    Mar 12 12:01:01 vps336855 CROND[27205]: (root) CMD (run-parts /etc/cron.hourly)
    Mar 12 12:01:01 vps336855 run-parts(/etc/cron.hourly)[27205]: starting 0anacron
    Mar 12 12:01:01 vps336855 run-parts(/etc/cron.hourly)[27214]: finished 0anacron
    Mar 12 13:01:01 vps336855 CROND[5826]: (root) CMD (run-parts /etc/cron.hourly)
    Mar 12 13:01:01 vps336855 run-parts(/etc/cron.hourly)[5826]: starting 0anacron
    Mar 12 13:01:01 vps336855 run-parts(/etc/cron.hourly)[5835]: finished 0anacron
    Mar 12 14:01:01 vps336855 CROND[17079]: (root) CMD (run-parts /etc/cron.hourly)
    Mar 12 14:01:01 vps336855 run-parts(/etc/cron.hourly)[17079]: starting 0anacron
    Mar 12 14:01:01 vps336855 run-parts(/etc/cron.hourly)[17088]: finished 0anacron
    Mar 12 15:01:01 vps336855 CROND[28227]: (root) CMD (run-parts /etc/cron.hourly)
    Mar 12 15:01:01 vps336855 run-parts(/etc/cron.hourly)[28227]: starting 0anacron
    Mar 12 15:01:01 vps336855 run-parts(/etc/cron.hourly)[28236]: finished 0anacron
    Mar 12 16:01:01 vps336855 CROND[6889]: (root) CMD (run-parts /etc/cron.hourly)
    Mar 12 16:01:01 vps336855 run-parts(/etc/cron.hourly)[6889]: starting 0anacron
    BOOT LOG

    Code:
    ESC%G           Welcome to CentOS
    Starting udev: udevd[403]: can not read '/etc/udev/rules.d/75-persistent-net-generator.rules'
    udevd[403]: can not read '/etc/udev/rules.d/75-persistent-net-generator.rules'
    ^M
    ESC%G[  OK  ]^M
    Setting hostname vps336855.ovh.net:  [  OK  ]^M
    Checking filesystems
    Checking all file systems.
    [/sbin/fsck.ext4 (1) -- /] fsck.ext4 -a /dev/vda1
    /dev/vda1: clean, 117341/2621440 files, 4110822/10485504 blocks
    [  OK  ]^M
    Remounting root filesystem in read-write mode:  [  OK  ]^M
    Mounting local filesystems:  [  OK  ]^M
    Enabling /etc/fstab swaps:  [  OK  ]^M
    Entering non-interactive startup
    ip6tables: Applying firewall rules: [  OK  ]^M
    iptables: Applying firewall rules: [  OK  ]^M
    Bringing up loopback interface:  [  OK  ]^M
    Bringing up interface eth0:  Determining if ip address 193.70.2.123 is already in use for device eth0...
    [  OK  ]^M
    Starting auditd: [  OK  ]^M
    Starting system logger: [  OK  ]^M
    Mounting filesystems:  [  OK  ]^M
    Starting acpi daemon: [  OK  ]^M
    Retrigger failed udev events[  OK  ]^M
    No kdump initial ramdisk found.[WARNING]^M
    Rebuilding /boot/initrd-2.6.32-642.13.1.el6.x86_64kdump.img
    Starting kdump:[  OK  ]^M
    Starting sshd: [  OK  ]^M
    Starting postfix: [  OK  ]^M
    Starting crond: [  OK  ]^M
    NETWORK INTERFACES (rescue mode)

    Code:
    root@rescue-pro:/mnt/sdb1# ifconfig
    eth0      Link encap:Ethernet  HWaddr fa:16:3e:10:ab:63
              inet addr:193.70.2.123  Bcast:193.70.2.123  Mask:255.255.255.255
              inet6 addr: fe80::f816:3eff:fe10:ab63/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:2424 errors:0 dropped:0 overruns:0 frame:0
              TX packets:1994 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:1973988 (1.8 MiB)  TX bytes:232318 (226.8 KiB)
    
    lo        Link encap:Local Loopback
              inet addr:127.0.0.1  Mask:255.0.0.0
              inet6 addr: ::1/128 Scope:Host
              UP LOOPBACK RUNNING  MTU:65536  Metric:1
              RX packets:42 errors:0 dropped:0 overruns:0 frame:0
              TX packets:42 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:9268 (9.0 KiB)  TX bytes:9268 (9.0 KiB)
    Last edited by Giusepe98PG; 20-03-2017 at 21:09

  2. #2
    You have CentOS 7 with a 2.6.32-642.13.1.el6 kernel? Do you have systemd? Can you see what went wrong with: journalctl ?

  3. #3
    @sacarde journalctl doesn't work in rescue mode. As for the kernel, this is it:

    root@rescue-pro:~# uname -a
    Linux rescue-pro 3.16.0-4-amd64 #1 SMP Debian 3.16.39-1+deb8u2 (2017-03-07) x86_64 GNU/Linux

  4. #4
    a port scan with nmap gives:

    Code:
    C:\Users\Giuseppe98PG\Desktop\nmap>nmap -sS -p22 -v -Pn 193.70.2.123
    
    
    Starting Nmap 7.40 ( https://nmap.org ) at 2017-03-20 23:25 W. Europe Standard Time
    Initiating Parallel DNS resolution of 1 host. at 23:25
    Completed Parallel DNS resolution of 1 host. at 23:25, 0.08s elapsed
    Initiating SYN Stealth Scan at 23:25
    Scanning 123.ip-193-70-2.eu (193.70.2.123) [1 port]
    Completed SYN Stealth Scan at 23:25, 2.25s elapsed (1 total ports)
    Nmap scan report for 123.ip-193-70-2.eu (193.70.2.123)
    Host is up.
    PORT   STATE    SERVICE
    22/tcp filtered ssh
    
    
    Read data files from: C:\Users\Giuseppe98PG\Desktop\nmap
    Nmap done: 1 IP address (1 host up) scanned in 4.17 seconds
               Raw packets sent: 2 (88B) | Rcvd: 0 (0B)
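A "filtered" state means nmap's SYN probe got no answer at all: something silently dropped it (typically a firewall), as opposed to a closed port, which would answer with a RST. Since the boot log above shows iptables rules being applied at startup, one thing worth checking from rescue mode is the rules file the installed system loads at boot. A sketch, assuming the CentOS-style path /etc/sysconfig/iptables on the mounted disk:

```shell
# Rules file the installed system applies at boot (CentOS convention);
# the exact path on the mounted rescue disk is an assumption.
RULES=/mnt/sdb1/etc/sysconfig/iptables
if [ -f "$RULES" ]; then
    # Is there an ACCEPT rule for SSH, and does an early DROP/REJECT precede it?
    grep -nE -- '--dport 22|DROP|REJECT' "$RULES"
else
    echo "rules file not found at $RULES (adjust the path)"
fi
```

iptables applies rules in order, so a blanket DROP line that appears before the port-22 ACCEPT would produce exactly this "filtered" symptom.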

  5. #5
    - to see which version you have: lsb_release -a

    - your "uname" shows a Debian kernel !!!

    - once in rescue mode you can start runlevel 3 with: systemctl isolate multi-user.target
    and then see whether, and which, errors show up in: journalctl

    - nmap shows you have the SSH server active, ... does it let you connect?

  6. #6
    - root@rescue-pro:/mnt/sdb1/var/log# lsb_release -a
    No LSB modules are available.
    Distributor ID: Debian
    Description: Debian GNU/Linux 8.7 (jessie)
    Release: 8.7
    Codename: jessie


    - root@rescue-pro:/mnt/sdb1/var/log# uname
    Linux

    - if I start runlevel 3 it kicks me off the server and won't let me log back in

    - No. It only lets me connect in rescue mode

  7. #7
    - the operating system you have is Debian 8.7

    - are you sure the journalctl command isn't available when you boot in "rescue" mode?

  8. #8
    Quote Originally posted by sacarde
    - the operating system you have is Debian 8.7

    - are you sure the journalctl command isn't available when you boot in "rescue" mode?
    root@rescue-pro:/mnt/sdb1/var/log# journalctl
    No journal files were found.
    root@rescue-pro:/mnt/sdb1/var/log#
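"No journal files were found" is expected here: journalctl reads the rescue system's own journal, not the mounted disk's. journalctl does have a --directory (-D) option to point it at another journal directory; and since Debian 8 keeps its journal volatile by default and relies on rsyslog for persistence, the plain-text logs on the mounted disk are the more likely place to look. A sketch under those assumptions:

```shell
# Point journalctl at the mounted system's journal, if one was kept on disk:
journalctl -D /mnt/sdb1/var/log/journal --no-pager 2>/dev/null \
    || echo "no persistent journal on the mounted system"
# Debian 8 defaults to a volatile journal; rsyslog writes the text logs instead:
tail -n 50 /mnt/sdb1/var/log/syslog 2>/dev/null \
    || echo "no syslog file at /mnt/sdb1/var/log/syslog"
```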

  9. #9
    - things to check, with these commands:

    systemctl --failed (lists the services that reported errors)

    systemctl list-unit-files (lists all started/startable services)

    you should have:
    systemd-journal-flush.service static
    systemd-journald.service static
    systemd-journald-dev-log.socket static
    systemd-journald.socket static


    - systemctl status systemd-journald.service (shows the service's state)

    - inside the file /etc/systemd/journald.conf, do you have any uncommented lines?



    p.s.
    you are running these tests as the root user, right?
    Last edited by sacarde; 21-03-2017 at 18:53
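The "uncommented lines" check above can be done mechanically; a sketch, with the path taken relative to the mounted disk used throughout this thread:

```shell
# Print only the settings actually in effect (non-comment, non-blank lines):
CONF=/mnt/sdb1/etc/systemd/journald.conf
if [ -f "$CONF" ]; then
    grep -Ev '^[[:space:]]*(#|$)' "$CONF" \
        || echo "everything is commented out: journald runs on defaults"
else
    echo "no journald.conf at $CONF"
fi
```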

  10. #10
    Quote Originally posted by sacarde
    - things to check, with these commands:

    systemctl --failed (lists the services that reported errors)

    systemctl list-unit-files (lists all started/startable services)

    you should have:
    systemd-journal-flush.service static
    systemd-journald.service static
    systemd-journald-dev-log.socket static
    systemd-journald.socket static



    - inside the file /etc/systemd/journald.conf, do you have any uncommented lines?



    p.s.
    you are running these tests as the root user, right?
    Code:
    root@rescue-pro:/mnt/sdb1/etc# systemctl --failed
      UNIT                         LOAD   ACTIVE SUB    DESCRIPTION
    ● smartd.service               loaded failed failed Self Monitoring and Reporting Technology (SMART) Daemon
    ● systemd-modules-load.service loaded failed failed Load Kernel Modules
    
    
    LOAD   = Reflects whether the unit definition was properly loaded.
    ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
    SUB    = The low-level unit activation state, values depend on unit type.
    
    
    2 loaded units listed. Pass --all to see loaded but inactive units, too.
    To show all installed unit files use 'systemctl list-unit-files'.
    root@rescue-pro:/mnt/sdb1/etc#
    Code:
    root@rescue-pro:/mnt/sdb1/etc# systemctl list-unit-files
    UNIT FILE                              STATE
    proc-sys-fs-binfmt_misc.automount      static
    dev-hugepages.mount                    static
    dev-mqueue.mount                       static
    proc-sys-fs-binfmt_misc.mount          static
    sys-fs-fuse-connections.mount          static
    sys-kernel-config.mount                static
    sys-kernel-debug.mount                 static
    tmp.mount                              disabled
    acpid.path                             enabled
    systemd-ask-password-console.path      static
    systemd-ask-password-wall.path         static
    acpid.service                          disabled
    atd.service                            enabled
    autovt@.service                        disabled
    bootlogd.service                       masked
    bootlogs.service                       masked
    bootmisc.service                       masked
    checkfs.service                        masked
    checkroot-bootclean.service            masked
    checkroot.service                      masked
    console-getty.service                  disabled
    console-shell.service                  disabled
    container-getty@.service               static
    cron.service                           enabled
    cryptdisks-early.service               masked
    cryptdisks.service                     masked
    dbus-org.freedesktop.hostname1.service static
    dbus-org.freedesktop.locale1.service   static
    dbus-org.freedesktop.login1.service    static
    dbus-org.freedesktop.machine1.service  static
    dbus-org.freedesktop.timedate1.service static
    dbus.service                           static
    debian-fixup.service                   static
    debug-shell.service                    disabled
    emergency.service                      static
    fuse.service                           masked
    getty-static.service                   static
    getty@.service                         enabled
    halt-local.service                     static
    halt.service                           masked
    hostname.service                       masked
    lines 1-42
    Yes, of course I'm running them in rescue mode as the root user

    I don't have the systemd directory anyway
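A missing systemd directory on the mounted disk would fit the el6 kernel seen earlier: CentOS 6 uses upstart/SysV init, not systemd, so there is no journal to read and the plain files under /var/log are the only logs. A quick way to confirm which init the installed system actually uses (the binary paths below are the usual locations, assumed here):

```shell
# systemd ships its PID-1 binary at one of these paths on systems that use it:
if [ -e /mnt/sdb1/usr/lib/systemd/systemd ] || [ -e /mnt/sdb1/lib/systemd/systemd ]; then
    echo "installed system uses systemd"
else
    echo "no systemd binary: SysV/upstart system, read /mnt/sdb1/var/log/* directly"
fi
```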
