2021/11/05

Previously I always mirrored traffic by setting up a port mirror on the switch, but the number of mirror ports is limited, so I looked for a way to do it from Linux instead and found daemonlogger, which is quite handy.

 daemonlogger -i eth1 -o eth2 -d 

On Proxmox, configuring this inside a guest has no effect; it has to be set up on the host.
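
A quick way to confirm the mirror is working (a minimal sketch; eth2 is just the output interface from the daemonlogger command above):

# watch a few packets on the output interface to confirm traffic is being copied
tcpdump -n -i eth2 -c 10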

2021/11/02

One guest occasionally fails during backup with the error below. The odd part is that it is not every day; it happens at random.

109: 2021-11-01 23:34:40 INFO: Starting Backup of VM 109 (qemu)
109: 2021-11-01 23:34:40 INFO: status = running
109: 2021-11-01 23:34:40 INFO: VM Name: 0.226-examtest
109: 2021-11-01 23:34:40 INFO: include disk 'virtio0' 'nfs218_system:109/vm-109-disk-0.qcow2' 100G
109: 2021-11-01 23:34:40 INFO: backup mode: snapshot
109: 2021-11-01 23:34:40 INFO: ionice priority: 7
109: 2021-11-01 23:34:40 INFO: creating Proxmox Backup Server archive 'vm/109/2021-11-01T15:34:40Z'
109: 2021-11-01 23:34:41 INFO: started backup task '2ba8928b-bd3b-4e90-a62f-7fd1a37d51d8'
109: 2021-11-01 23:34:41 INFO: resuming VM again
109: 2021-11-01 23:34:44 ERROR: VM 109 qmp command 'cont' failed - got timeout
109: 2021-11-01 23:34:44 INFO: aborting backup job
109: 2021-11-01 23:34:44 INFO: resuming VM again
109: 2021-11-01 23:34:44 ERROR: Backup of VM 109 failed - VM 109 qmp command 'cont' failed - got timeout


A forum search turned up others hitting the same issue. The suggested fix is to edit

/usr/share/perl5/PVE/QMPClient.pm

around line 134, changing

      } else {
            $timeout = 3; # default
to
      } else {
            $timeout = 8; # default

I went straight to $timeout = 30.
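
For reference, a one-liner that makes the same edit (a sketch only, assuming the line still matches the snippet quoted above; the change is undone whenever a Proxmox package update replaces QMPClient.pm, so re-check after upgrades):

# bump the default QMP timeout from 3 to 30 seconds
sed -i 's/$timeout = 3; # default/$timeout = 30; # default/' /usr/share/perl5/PVE/QMPClient.pm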

Then restart the PVE daemons:

for service in pvedaemon.service pveproxy.service pvestatd.service ;do
     echo "systemctl restart $service"
     systemctl restart $service
  done


That's the change for now; I'll keep watching to see if the backups stay clean.


2021/10/31

After upgrading Proxmox to version 7, the management UI lets you choose v3 or v4 when mounting an NFS storage.
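
The same can be done from the CLI; a minimal sketch, assuming the storage options field takes normal NFS mount options (the storage name, server address and export path below are made up for illustration):

pvesm add nfs nfs_test --server 10.0.0.218 --export /volume1/test --content images --options vers=4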

I used fio to compare v3 and v4 performance. The job file is as follows:


# This job file tries to mimic the Intel IOMeter File Server Access Pattern

[global]
description=Emulation of Intel IOmeter File Server Access Pattern

[iometer]
bssplit=512/10:1k/5:2k/5:4k/60:8k/2:16k/4:32k/4:64k/10
filename=PhysicalDrive1:PhysicalDrive2:PhysicalDrive3
size=10G
rw=randrw
#set read 50% write 50%
rwmixread=50
direct=1
runtime=60
# IOMeter defines the server loads as the following:
# iodepth=1      Linear
# iodepth=4      Very Light
# iodepth=8      Light
# iodepth=64     Moderate
# iodepth=256    Heavy
iodepth=64


I ran it three times; the values from the last run are below.


nfs v4


iometer: (g=0): rw=randrw, bs=(R) 512B-64.0KiB, (W) 512B-64.0KiB, (T) 512B-64.0KiB, ioengine=psync, iodepth=64

fio-3.25

Starting 1 process

Jobs: 1 (f=3): [m(1)][100.0%][r=168KiB/s,w=107KiB/s][r=11,w=12 IOPS][eta 00m:00s]

iometer: (groupid=0, jobs=1): err= 0: pid=2342174: Sun Oct 31 07:28:44 2021

  Description  : [Emulation of Intel IOmeter File Server Access Pattern]

  read: IOPS=55, BW=610KiB/s (625kB/s)(35.7MiB/60016msec)

    clat (usec): min=430, max=339828, avg=10019.94, stdev=13081.64

     lat (usec): min=431, max=339828, avg=10020.43, stdev=13081.64

    clat percentiles (usec):

     |  1.00th=[   775],  5.00th=[  3228], 10.00th=[  4015], 20.00th=[  5473],

     | 30.00th=[  6587], 40.00th=[  7832], 50.00th=[  9110], 60.00th=[ 10159],

     | 70.00th=[ 11338], 80.00th=[ 12387], 90.00th=[ 13304], 95.00th=[ 14222],

     | 99.00th=[ 43254], 99.50th=[ 99091], 99.90th=[208667], 99.95th=[299893],

     | 99.99th=[341836]

   bw (  KiB/s): min=   72, max= 1180, per=100.00%, avg=613.90, stdev=270.71, samples=119

   iops        : min=    8, max=   90, avg=56.03, stdev=18.14, samples=119

  write: IOPS=57, BW=592KiB/s (606kB/s)(34.7MiB/60016msec); 0 zone resets

    clat (usec): min=316, max=218978, avg=7624.96, stdev=11376.82

     lat (usec): min=316, max=218979, avg=7626.12, stdev=11376.98

    clat percentiles (usec):

     |  1.00th=[   400],  5.00th=[   457], 10.00th=[   490], 20.00th=[   586],

     | 30.00th=[  1172], 40.00th=[  4113], 50.00th=[  6128], 60.00th=[  8094],

     | 70.00th=[ 10028], 80.00th=[ 11994], 90.00th=[ 13960], 95.00th=[ 18482],

     | 99.00th=[ 37487], 99.50th=[ 85459], 99.90th=[152044], 99.95th=[212861],

     | 99.99th=[219153]

   bw (  KiB/s): min=   18, max= 1278, per=100.00%, avg=595.21, stdev=271.81, samples=119

   iops        : min=    6, max=  106, avg=58.24, stdev=19.98, samples=119

  lat (usec)   : 500=6.12%, 750=8.19%, 1000=1.17%

  lat (msec)   : 2=1.67%, 4=7.70%, 10=39.02%, 20=33.10%, 50=2.17%

  lat (msec)   : 100=0.41%, 250=0.41%, 500=0.03%

  cpu          : usr=0.16%, sys=0.57%, ctx=6838, majf=0, minf=14

  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%

     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%

     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%

     issued rwts: total=3339,3476,0,0 short=0,0,0,0 dropped=0,0,0,0

     latency   : target=0, window=0, percentile=100.00%, depth=64


Run status group 0 (all jobs):

   READ: bw=610KiB/s (625kB/s), 610KiB/s-610KiB/s (625kB/s-625kB/s), io=35.7MiB (37.5MB), run=60016-60016msec

  WRITE: bw=592KiB/s (606kB/s), 592KiB/s-592KiB/s (606kB/s-606kB/s), io=34.7MiB (36.4MB), run=60016-60016msec



nfs v3


iometer: (g=0): rw=randrw, bs=(R) 512B-64.0KiB, (W) 512B-64.0KiB, (T) 512B-64.0KiB, ioengine=psync, iodepth=64

fio-3.25

Starting 1 process

Jobs: 1 (f=3): [m(1)][100.0%][r=581KiB/s,w=609KiB/s][r=51,w=42 IOPS][eta 00m:00s]

iometer: (groupid=0, jobs=1): err= 0: pid=2347167: Sun Oct 31 07:35:09 2021

  Description  : [Emulation of Intel IOmeter File Server Access Pattern]

  read: IOPS=49, BW=548KiB/s (561kB/s)(32.1MiB/60012msec)

    clat (usec): min=394, max=285769, avg=9756.36, stdev=10269.42

     lat (usec): min=394, max=285770, avg=9756.86, stdev=10269.42

    clat percentiles (usec):

     |  1.00th=[   889],  5.00th=[  3294], 10.00th=[  4113], 20.00th=[  5407],

     | 30.00th=[  6718], 40.00th=[  7898], 50.00th=[  8979], 60.00th=[ 10290],

     | 70.00th=[ 11338], 80.00th=[ 12387], 90.00th=[ 13435], 95.00th=[ 14484],

     | 99.00th=[ 30278], 99.50th=[ 56361], 99.90th=[206570], 99.95th=[233833],

     | 99.99th=[287310]

   bw (  KiB/s): min=    8, max= 1118, per=100.00%, avg=548.19, stdev=256.60, samples=119

   iops        : min=    2, max=   86, avg=49.50, stdev=15.73, samples=119

  write: IOPS=51, BW=522KiB/s (534kB/s)(30.6MiB/60012msec); 0 zone resets

    clat (usec): min=357, max=221586, avg=10035.07, stdev=12250.59

     lat (usec): min=358, max=221587, avg=10036.14, stdev=12250.75

    clat percentiles (usec):

     |  1.00th=[   482],  5.00th=[   627], 10.00th=[  1172], 20.00th=[  4146],

     | 30.00th=[  5735], 40.00th=[  7111], 50.00th=[  8586], 60.00th=[ 10159],

     | 70.00th=[ 11600], 80.00th=[ 13042], 90.00th=[ 16188], 95.00th=[ 21103],

     | 99.00th=[ 47449], 99.50th=[ 93848], 99.90th=[168821], 99.95th=[196084],

     | 99.99th=[221250]

   bw (  KiB/s): min=   34, max= 1039, per=100.00%, avg=523.79, stdev=223.74, samples=119

   iops        : min=    6, max=   82, avg=51.63, stdev=15.50, samples=119

  lat (usec)   : 500=0.73%, 750=4.03%, 1000=0.87%

  lat (msec)   : 2=0.71%, 4=7.69%, 10=44.35%, 20=37.52%, 50=3.37%

  lat (msec)   : 100=0.38%, 250=0.33%, 500=0.02%

  cpu          : usr=0.25%, sys=0.59%, ctx=6076, majf=0, minf=15

  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%

     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%

     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%

     issued rwts: total=2968,3090,0,0 short=0,0,0,0 dropped=0,0,0,0

     latency   : target=0, window=0, percentile=100.00%, depth=64


Run status group 0 (all jobs):

   READ: bw=548KiB/s (561kB/s), 548KiB/s-548KiB/s (561kB/s-561kB/s), io=32.1MiB (33.7MB), run=60012-60012msec

  WRITE: bw=522KiB/s (534kB/s), 522KiB/s-522KiB/s (534kB/s-534kB/s), io=30.6MiB (32.1MB), run=60012-60012msec


At first glance, v4 looks slightly better.

2021/10/24

Ran an fio I/O test on the all-SSD NAS. The job file is as follows:

# This job file tries to mimic the Intel IOMeter File Server Access Pattern
[global]
description=Emulation of Intel IOmeter File Server Access Pattern

[iometer]
bssplit=512/10:1k/5:2k/5:4k/60:8k/2:16k/4:32k/4:64k/10
filename=\\.\PhysicalDrive1:\\.\PhysicalDrive2:\\.\PhysicalDrive3
size=1G
rw=randrw
#set read 50% write 50%
rwmixread=50
direct=1
runtime=60                  
# IOMeter defines the server loads as the following:
# iodepth=1                    Linear
# iodepth=4                    Very Light
# iodepth=8                    Light
# iodepth=64                    Moderate
# iodepth=256                    Heavy
iodepth=64


Results for the four cache modes are below.

default no cache

iometer: (g=0): rw=randrw, bs=(R) 512B-64.0KiB, (W) 512B-64.0KiB, (T) 512B-64.0KiB, ioengine=psync, iodepth=64
fio-3.7
Starting 1 process
iometer: Laying out IO files (3 files / total 1023MiB)
Jobs: 1 (f=3): [m(1)][100.0%][r=12.1MiB/s,w=12.3MiB/s][r=2912,w=2900 IOPS][eta 00m:00s]
iometer: (groupid=0, jobs=1): err= 0: pid=1059: Sun Oct 24 21:59:29 2021
  Description  : [Emulation of Intel IOmeter File Server Access Pattern]
   read: IOPS=2643, BW=15.0MiB/s (16.7MB/s)(518MiB/32459msec)
    clat (usec): min=98, max=3023, avg=151.62, stdev=52.08
     lat (usec): min=100, max=3025, avg=153.64, stdev=52.78
    clat percentiles (usec):
     |  1.00th=[  108],  5.00th=[  113], 10.00th=[  116], 20.00th=[  120],
     | 30.00th=[  124], 40.00th=[  129], 50.00th=[  137], 60.00th=[  145],
     | 70.00th=[  153], 80.00th=[  167], 90.00th=[  217], 95.00th=[  262],
     | 99.00th=[  326], 99.50th=[  367], 99.90th=[  498], 99.95th=[  644],
     | 99.99th=[ 1074]
   bw (  KiB/s): min=   17, max=29187, per=100.00%, avg=16368.95, stdev=5319.54, samples=64
   iops        : min=    6, max= 3237, avg=2636.11, stdev=553.54, samples=64
  write: IOPS=2632, BW=15.6MiB/s (16.4MB/s)(506MiB/32459msec)
    clat (usec): min=109, max=269781, avg=208.42, stdev=1809.64
     lat (usec): min=111, max=269787, avg=210.60, stdev=1809.68
    clat percentiles (usec):
     |  1.00th=[  117],  5.00th=[  122], 10.00th=[  126], 20.00th=[  131],
     | 30.00th=[  137], 40.00th=[  141], 50.00th=[  151], 60.00th=[  159],
     | 70.00th=[  167], 80.00th=[  182], 90.00th=[  233], 95.00th=[  277],
     | 99.00th=[  343], 99.50th=[  388], 99.90th=[13435], 99.95th=[32637],
     | 99.99th=[77071]
   bw (  KiB/s): min=   25, max=27107, per=100.00%, avg=16004.09, stdev=5117.56, samples=64
   iops        : min=    8, max= 3423, avg=2627.02, stdev=552.46, samples=64
  lat (usec)   : 100=0.01%, 250=93.02%, 500=6.83%, 750=0.06%, 1000=0.02%
  lat (msec)   : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.03%
  lat (msec)   : 100=0.01%, 250=0.01%, 500=0.01%
  cpu          : usr=4.79%, sys=12.52%, ctx=171263, majf=0, minf=36
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=85804,85461,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: bw=15.0MiB/s (16.7MB/s), 15.0MiB/s-15.0MiB/s (16.7MB/s-16.7MB/s), io=518MiB (543MB), run=32459-32459msec
  WRITE: bw=15.6MiB/s (16.4MB/s), 15.6MiB/s-15.6MiB/s (16.4MB/s-16.4MB/s), io=506MiB (531MB), run=32459-32459msec

Disk stats (read/write):
  vda: ios=85701/85498, merge=0/34, ticks=11506/17166, in_queue=28672, util=99.78%

===================================================================

direct sync

iometer: (g=0): rw=randrw, bs=(R) 512B-64.0KiB, (W) 512B-64.0KiB, (T) 512B-64.0KiB, ioengine=psync, iodepth=64
fio-3.7
Starting 1 process
Jobs: 1 (f=3): [m(1)][100.0%][r=9873KiB/s,w=9.85MiB/s][r=2326,w=2311 IOPS][eta 00m:00s]
iometer: (groupid=0, jobs=1): err= 0: pid=1024: Sun Oct 24 22:01:57 2021
  Description  : [Emulation of Intel IOmeter File Server Access Pattern]
   read: IOPS=2588, BW=15.6MiB/s (16.4MB/s)(518MiB/33143msec)
    clat (usec): min=101, max=2854, avg=153.73, stdev=51.58
     lat (usec): min=103, max=2855, avg=155.67, stdev=52.05
    clat percentiles (usec):
     |  1.00th=[  109],  5.00th=[  115], 10.00th=[  118], 20.00th=[  123],
     | 30.00th=[  126], 40.00th=[  133], 50.00th=[  141], 60.00th=[  147],
     | 70.00th=[  155], 80.00th=[  169], 90.00th=[  215], 95.00th=[  260],
     | 99.00th=[  334], 99.50th=[  375], 99.90th=[  519], 99.95th=[  660],
     | 99.99th=[ 1045]
   bw (  KiB/s): min= 8901, max=29142, per=100.00%, avg=16021.95, stdev=4651.25, samples=66
   iops        : min= 1870, max= 3118, avg=2589.53, stdev=300.89, samples=66
  write: IOPS=2578, BW=15.3MiB/s (16.0MB/s)(506MiB/33143msec)
    clat (usec): min=119, max=123840, avg=214.82, stdev=1482.62
     lat (usec): min=121, max=123842, avg=216.99, stdev=1482.65
    clat percentiles (usec):
     |  1.00th=[  127],  5.00th=[  133], 10.00th=[  137], 20.00th=[  143],
     | 30.00th=[  147], 40.00th=[  153], 50.00th=[  163], 60.00th=[  174],
     | 70.00th=[  182], 80.00th=[  194], 90.00th=[  241], 95.00th=[  289],
     | 99.00th=[  367], 99.50th=[  424], 99.90th=[ 5866], 99.95th=[29492],
     | 99.99th=[80217]
   bw (  KiB/s): min= 9162, max=26677, per=100.00%, avg=15655.61, stdev=4532.61, samples=66
   iops        : min= 1860, max= 3100, avg=2577.86, stdev=315.86, samples=66
  lat (usec)   : 250=92.52%, 500=7.29%, 750=0.10%, 1000=0.02%
  lat (msec)   : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.02%
  lat (msec)   : 100=0.01%, 250=0.01%
  cpu          : usr=4.74%, sys=11.81%, ctx=171269, majf=0, minf=36
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=85804,85461,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: bw=15.6MiB/s (16.4MB/s), 15.6MiB/s-15.6MiB/s (16.4MB/s-16.4MB/s), io=518MiB (543MB), run=33143-33143msec
  WRITE: bw=15.3MiB/s (16.0MB/s), 15.3MiB/s-15.3MiB/s (16.0MB/s-16.0MB/s), io=506MiB (531MB), run=33143-33143msec

Disk stats (read/write):
  vda: ios=85795/85513, merge=0/20, ticks=11798/16745, in_queue=28543, util=99.74%

===============================================================

write through

iometer: (g=0): rw=randrw, bs=(R) 512B-64.0KiB, (W) 512B-64.0KiB, (T) 512B-64.0KiB, ioengine=psync, iodepth=64
fio-3.7
Starting 1 process
Jobs: 1 (f=3): [m(1)][100.0%][r=13.8MiB/s,w=14.2MiB/s][r=3275,w=3321 IOPS][eta 00m:00s]
iometer: (groupid=0, jobs=1): err= 0: pid=1025: Sun Oct 24 22:03:47 2021
  Description  : [Emulation of Intel IOmeter File Server Access Pattern]
   read: IOPS=2467, BW=14.9MiB/s (15.6MB/s)(518MiB/34776msec)
    clat (usec): min=28, max=2270, avg=105.88, stdev=76.86
     lat (usec): min=30, max=2272, avg=107.86, stdev=77.07
    clat percentiles (usec):
     |  1.00th=[   35],  5.00th=[   36], 10.00th=[   37], 20.00th=[   38],
     | 30.00th=[   40], 40.00th=[   43], 50.00th=[   81], 60.00th=[  135],
     | 70.00th=[  151], 80.00th=[  165], 90.00th=[  194], 95.00th=[  241],
     | 99.00th=[  330], 99.50th=[  371], 99.90th=[  482], 99.95th=[  570],
     | 99.99th=[  971]
   bw (  KiB/s): min= 9946, max=20245, per=99.96%, avg=15240.57, stdev=2279.39, samples=69
   iops        : min= 1582, max= 3690, avg=2457.10, stdev=587.94, samples=69
  write: IOPS=2457, BW=14.6MiB/s (15.3MB/s)(506MiB/34776msec)
    clat (usec): min=135, max=78912, avg=281.99, stdev=1239.11
     lat (usec): min=136, max=78913, avg=284.13, stdev=1239.15
    clat percentiles (usec):
     |  1.00th=[  147],  5.00th=[  151], 10.00th=[  157], 20.00th=[  165],
     | 30.00th=[  178], 40.00th=[  192], 50.00th=[  210], 60.00th=[  245],
     | 70.00th=[  285], 80.00th=[  330], 90.00th=[  375], 95.00th=[  424],
     | 99.00th=[  570], 99.50th=[  644], 99.90th=[ 9110], 99.95th=[28443],
     | 99.99th=[63177]
   bw (  KiB/s): min= 9128, max=20043, per=99.95%, avg=14897.84, stdev=2082.27, samples=69
   iops        : min= 1570, max= 3638, avg=2447.70, stdev=574.29, samples=69
  lat (usec)   : 50=23.06%, 100=2.41%, 250=53.12%, 500=20.28%, 750=0.99%
  lat (usec)   : 1000=0.06%
  lat (msec)   : 2=0.02%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.02%
  lat (msec)   : 100=0.01%
  cpu          : usr=4.38%, sys=11.16%, ctx=171251, majf=0, minf=36
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=85804,85461,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: bw=14.9MiB/s (15.6MB/s), 14.9MiB/s-14.9MiB/s (15.6MB/s-15.6MB/s), io=518MiB (543MB), run=34776-34776msec
  WRITE: bw=14.6MiB/s (15.3MB/s), 14.6MiB/s-14.6MiB/s (15.3MB/s-15.3MB/s), io=506MiB (531MB), run=34776-34776msec

Disk stats (read/write):
  vda: ios=85360/85041, merge=0/18, ticks=7406/22572, in_queue=29978, util=99.75%

================================================================

write back

iometer: (g=0): rw=randrw, bs=(R) 512B-64.0KiB, (W) 512B-64.0KiB, (T) 512B-64.0KiB, ioengine=psync, iodepth=64
fio-3.7
Starting 1 process
Jobs: 1 (f=3): [m(1)][100.0%][r=29.4MiB/s,w=29.9MiB/s][r=6976,w=7009 IOPS][eta 00m:00s]
iometer: (groupid=0, jobs=1): err= 0: pid=1021: Sun Oct 24 22:05:06 2021
  Description  : [Emulation of Intel IOmeter File Server Access Pattern]
   read: IOPS=3871, BW=23.4MiB/s (24.5MB/s)(518MiB/22164msec)
    clat (usec): min=28, max=12350, avg=112.79, stdev=135.36
     lat (usec): min=30, max=12352, avg=114.83, stdev=135.55
    clat percentiles (usec):
     |  1.00th=[   35],  5.00th=[   37], 10.00th=[   38], 20.00th=[   41],
     | 30.00th=[   43], 40.00th=[   46], 50.00th=[   69], 60.00th=[  139],
     | 70.00th=[  157], 80.00th=[  178], 90.00th=[  212], 95.00th=[  262],
     | 99.00th=[  363], 99.50th=[  412], 99.90th=[  603], 99.95th=[  938],
     | 99.99th=[ 5342]
   bw (  KiB/s): min=10682, max=31639, per=100.00%, avg=23930.39, stdev=4126.10, samples=44
   iops        : min= 2212, max= 7366, avg=3861.02, stdev=1436.84, samples=44
  write: IOPS=3855, BW=22.8MiB/s (23.9MB/s)(506MiB/22164msec)
    clat (usec): min=36, max=203000, avg=126.42, stdev=930.79
     lat (usec): min=38, max=203002, avg=128.86, stdev=930.84
    clat percentiles (usec):
     |  1.00th=[   44],  5.00th=[   47], 10.00th=[   49], 20.00th=[   52],
     | 30.00th=[   55], 40.00th=[   58], 50.00th=[   62], 60.00th=[   97],
     | 70.00th=[  153], 80.00th=[  215], 90.00th=[  258], 95.00th=[  285],
     | 99.00th=[  388], 99.50th=[  441], 99.90th=[  693], 99.95th=[ 1221],
     | 99.99th=[15270]
   bw (  KiB/s): min=10302, max=32041, per=100.00%, avg=23399.55, stdev=4090.09, samples=44
   iops        : min= 2148, max= 7414, avg=3846.48, stdev=1417.26, samples=44
  lat (usec)   : 50=28.96%, 100=27.75%, 250=34.74%, 500=8.33%, 750=0.15%
  lat (usec)   : 1000=0.02%
  lat (msec)   : 2=0.02%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.01%
  lat (msec)   : 250=0.01%
  cpu          : usr=7.40%, sys=17.75%, ctx=171252, majf=0, minf=36
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=85804,85461,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: bw=23.4MiB/s (24.5MB/s), 23.4MiB/s-23.4MiB/s (24.5MB/s-24.5MB/s), io=518MiB (543MB), run=22164-22164msec
  WRITE: bw=22.8MiB/s (23.9MB/s), 22.8MiB/s-22.8MiB/s (23.9MB/s-23.9MB/s), io=506MiB (531MB), run=22164-22164msec

Disk stats (read/write):
  vda: ios=85645/85340, merge=0/17, ticks=8026/10446, in_queue=18472, util=99.52%
  
  

The first three modes show little difference, but with write back both read and write throughput are more than 1.5x faster.

Recorded here for reference.
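
To switch a guest disk's cache mode from the CLI instead of the GUI, something like the following should work (a sketch reusing VM 109's virtio0 disk from the backup log above; check the current disk definition with qm config 109 first):

# re-attach virtio0 with cache=writeback (or cache=writethrough)
qm set 109 --virtio0 nfs218_system:109/vm-109-disk-0.qcow2,cache=writeback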


A friend recently asked how to pull data from Graylog over its REST API with Python. I have always used curl for this and never Python. Today he worked it out and shared it, so I'm writing it down here; thanks to him.

The code is as follows:


import requests

user = 'admin'
pw = 'pwd'
send_format_date_from = '2021-10-21T16:00:00.000Z'
send_format_date_to = '2021-10-21T17:00:00.000Z'
str = 'search key word'
url='http://graylog_ip:9000/api/views/search/messages'
header = {'Accept':'text/csv,application/json', 'Content-Type':'application/json', 'X-Requested-By':'cli'}

# absolute time range, using the from/to values defined above
graylog_send_data={ "streams":["000000000000000000000001"], "timerange":[ "absolute",{ "from":send_format_date_from, "to":send_format_date_to } ], "query_string":{ "type":"elasticsearch", "query_string":str } }

# relative time range, "range" is in seconds
graylog_send_data={ "streams":["000000000000000000000001"], "timerange":{ "type":"relative","range":60 }, "query_string":{ "type":"elasticsearch", "query_string":str } }

r = requests.post(url, auth=(user, pw), headers=header, json=graylog_send_data)

print(r.text)


Use either the relative or the absolute time range, not both (the second graylog_send_data assignment above overrides the first).
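
For completeness, the same relative-time query can be sent with curl, which is what I normally use; a sketch built from the same URL, headers and payload as the Python version:

curl -s -u admin:pwd \
  -H 'Content-Type: application/json' -H 'Accept: text/csv' -H 'X-Requested-By: cli' \
  -X POST 'http://graylog_ip:9000/api/views/search/messages' \
  -d '{"streams":["000000000000000000000001"],"timerange":{"type":"relative","range":60},"query_string":{"type":"elasticsearch","query_string":"search key word"}}'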

2021/10/21

After upgrading to Proxmox 7, a few guests started reporting filesystem errors.

(screenshot of the guest error messages omitted)
Checking with badblocks and xfs_repair turned up no errors, and the individual disks on the NAS all look fine, so I'm still looking for the cause.

One guest had the problem hit its swap area. For now I've turned swap off on it and raised the RAM from 2G to 4G (a sketch of the swap change is below); no anomalies so far. Not sure whether it's actually a RAM issue.
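
For the record, disabling swap on that guest amounts to roughly this (a minimal sketch; which line defines swap in /etc/fstab depends on the system):

swapoff -a        # turn swap off immediately
# then comment out the swap entry in /etc/fstab so it stays off after reboot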


2021/10/24 update

Four machines have been affected; what they have in common is that these guests all do heavy I/O. They were handled as follows:

ntopng: the system ID changed after the upgrade, so I moved it to an LXC and requested a new key.

librenms: downloaded the new VM image and migrated the data over to the new machine.

https://docs.librenms.org/Support/FAQ/#how-do-i-move-my-librenms-install-to-another-server


That leaves cacti and syslog. From the logs, the problem happens on writes. All guest disks currently use the default no cache mode, so it may be a performance issue.

https://adminkk.blogspot.com/2016/05/wsus-proxmox-winmount-nfs-wsus-iscsi.html


So for those two guests I set one to write back and the other to write through. They have been running for two days so far; still watching.


2021/10/06


How to install a certificate on a Synology NAS.

(this entry was a series of screenshots, not reproduced here)

2021/10/01

After installing PBS on the Synology NAS, updating failed with the following error:


Err:4 http://download.proxmox.com/debian/pbs bullseye InRelease

  The following signatures couldn't be verified because the public key is not available: NO_PUBKEY DD4BA3917E23BF59

Reading package lists... Done

W: GPG error: http://download.proxmox.com/debian/pbs bullseye InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY DD4BA3917E23BF59

E: The repository 'http://download.proxmox.com/debian/pbs bullseye InRelease' is not signed.

N: Updating from such a repository can't be done securely, and is therefore disabled by default.

N: See apt-secure(8) manpage for repository creation and user configuration details.


The GPG key is missing; install it with:

wget https://enterprise.proxmox.com/debian/proxmox-release-bullseye.gpg -O /etc/apt/trusted.gpg.d/proxmox-release-bullseye.gpg 


After that, apt update works fine.


https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_11_Bullseye#Install_Proxmox_VE


A user brought in a DVD today, wanting to share it with others. My first thought was to just drop the files onto Google Drive and share them, but the disc is copy-protected, so I decided to convert it to MP4 instead. I tried a lot of tools; HandBrake was the one that worked. Reference:

https://ephrain.net/ubuntu-%E4%BD%BF%E7%94%A8-handbrake-%E5%B0%87-dvd-%E8%BD%89%E6%88%90-mp4/
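
I used the HandBrake GUI, but the command-line version should do the same job; a rough sketch, assuming the drive shows up as /dev/sr0 and libdvdcss is available for the copy protection:

HandBrakeCLI -i /dev/sr0 -t 1 -o output.mp4 --preset "Fast 1080p30"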

2021/09/19

Today I wanted to move the ISO files that used to be served by nginx in an LXC onto Docker on the Synology.

First, download the container image.

After downloading, deploy it.

Map ports 80 and 22.

The template image does not come with ssh, so it has to be installed manually.

Go to the container's Details page and open a Terminal.

apt update
apt install openssh-server


Once installed, start it with

service ssh start

(systemctl cannot be used inside the container)


The remaining problem is that the ssh server does not start automatically when the container boots; it has to be started by hand from the terminal every time (a possible workaround is sketched below).
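
One possible workaround (untested here) is to start sshd from outside the container, e.g. from the Synology shell or a boot-up task in Task Scheduler; the container name below is a placeholder:

docker exec my-nginx-iso service ssh start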


2021/09/07

Today, after installing PBS in Docker, updating failed with the following error:


Hit:1 http://deb.debian.org/debian bullseye InRelease

Hit:2 http://security.debian.org/debian-security bullseye-security InRelease

Hit:3 http://deb.debian.org/debian bullseye-updates InRelease

Get:4 http://download.proxmox.com/debian/pbs bullseye InRelease [3067 B]

Err:4 http://download.proxmox.com/debian/pbs bullseye InRelease

  The following signatures couldn't be verified because the public key is not available: NO_PUBKEY DD4BA3917E23BF59

Reading package lists... Done

W: GPG error: http://download.proxmox.com/debian/pbs bullseye InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY DD4BA3917E23BF59

E: The repository 'http://download.proxmox.com/debian/pbs bullseye InRelease' is not signed.

N: Updating from such a repository can't be done securely, and is therefore disabled by default.

N: See apt-secure(8) manpage for repository creation and user configuration details.


The fix:


wget https://enterprise.proxmox.com/debian/proxmox-release-bullseye.gpg -O /etc/apt/trusted.gpg.d/proxmox-release-bullseye.gpg 



https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_11_Bullseye#Install_Proxmox_VE

https://forum.proxmox.com/threads/problem-with-repository-and-upgrade.95020/



2021/09/02

Today I needed to reset a Windows 10 password and planned to use Kali Linux.

After booting Kali, running chntpw produced the following error:


root@sam:/media/sda3/Windows/System32/config# chntpw -i SAM
chntpw version 1.00 140201, (c) Petter N Hagen
openHive(SAM) failed: Read-only file system, trying read-only
openHive(): read error: : Read-only file system
chntpw: Unable to open/read a hive, exiting..

It turns out Windows 10 leaves the partition in a state that forces it read-only. The fix is to unmount the Windows partition first and run

ntfsfix /dev/sda3

then mount it again, after which chntpw works normally. (The full sequence is sketched below.)
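
The whole sequence, as a sketch (device and mount point follow the paths shown in the error above; adjust to your disk layout):

umount /media/sda3            # unmount the Windows partition first
ntfsfix /dev/sda3             # clear the NTFS dirty/journal state left behind by Windows
mount /dev/sda3 /media/sda3   # mount it again, now writable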



2021/08/29

Setting up SSL for apache2 on Ubuntu.


Install OpenSSL:

sudo apt install openssl


Enable the Apache2 SSL module:

sudo a2enmod ssl


Put the certificate in the following directory:


/etc/pki/tls/certs


Edit

/etc/apache2/sites-available/default-ssl.conf


SSLCertificateFile /etc/pki/tls/certs/server.cer

SSLCertificateKeyFile /etc/pki/tls/certs/server.key


cd /etc/apache2/sites-enabled

ln -s ../sites-available/default-ssl.conf


systemctl restart apache2


https://20.65.210.123/index.php/2021/05/12/ubuntu-apache-ssl/

2021/08/28

Recently we were asked to move all websites to https. Taking CentOS 7 with Apache as the example: first obtain a certificate. Since the organisation already bought one covering the whole domain, I just used that.


Put the certificate in the following directory:

/etc/pki/tls/certs


Install mod_ssl:

$ yum -y install mod_ssl


Edit ssl.conf:

$ vi /etc/httpd/conf.d/ssl.conf

changing these two lines:

SSLCertificateFile /etc/pki/tls/certs/server.cer 

SSLCertificateKeyFile /etc/pki/tls/certs/server.key


Restart Apache:

$ systemctl restart httpd.service


If you are using nginx instead, edit /etc/nginx/nginx.conf and add the SSL settings marked by the comments below:


server {

  listen 80 default_server;
  listen [::]:80 default_server;

  # add the SSL listeners
  listen 443 ssl default_server;
  listen [::]:443 ssl default_server;

  # paths to the certificate and key
  ssl_certificate /etc/pki/tls/certs/server.cer;
  ssl_certificate_key /etc/pki/tls/certs/server.key;

  # ...
}


systemctl restart nginx


https://www.codepulse.com.tw/zh-tw/ssl%E6%86%91%E8%AD%89%E5%AE%89%E8%A3%9D%E6%95%99%E5%AD%B8%EF%BC%8C%E4%BB%A5centos%E7%82%BA%E4%BE%8B


https://blog.gtwang.org/linux/nginx-create-and-install-ssl-certificate-on-ubuntu-linux/

2021/08/19

I've recently been testing a new system that needs large downloads to validate, and something odd turned up: I clearly downloaded a 9G ISO twice, yet the NetFlow statistics showed only about 4G. The statistics from other devices were fine, so the problem had to be the device exporting the NetFlow. On closer inspection, the byte-count field in NetFlow v5 records is only 4 bytes long, so a burst of heavy traffic overflows it and the numbers come out wrong. The device in question kept using 4 bytes for that field in NetFlow v9 as well, but in v9 the field length is adjustable; after changing it to 8 bytes the values are correct again. (A quick check on the 4-byte limit is below.)
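
A quick sanity check on why 4 bytes is not enough: an unsigned 32-bit counter tops out at roughly 4 GiB, so a 9G ISO cannot fit in a single flow record's byte count.

echo $(( 2**32 ))        # 4294967296 bytes, about 4 GiB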

So the conclusion: stop using NetFlow v5.

2021/08/10

After upgrading Proxmox to 7, CentOS 7 LXC containers fail to start with the following error:


WARN: old systemd (< v232) detected, container won't run in a pure cgroupv2 environment! Please see documentation -> container -> cgroup version.

TASK WARNINGS: 1 


The workaround:



vi /etc/default/grub


Change this line from

GRUB_CMDLINE_LINUX_DEFAULT="quiet"

to

GRUB_CMDLINE_LINUX_DEFAULT="systemd.unified_cgroup_hierarchy=0 quiet"


Update grub:

update-grub


Reboot.


https://pve.proxmox.com/pve-docs/chapter-sysadmin.html#sysboot_edit_kernel_cmdline

https://pve.proxmox.com/wiki/Upgrade_from_6.x_to_7.0#Old_Container_and_CGroupv2

https://pve.proxmox.com/pve-docs/chapter-pct.html#pct_cgroup_compat

https://forum.proxmox.com/threads/unified-cgroup-v2-layout-upgrade-warning-pve-6-4-to-7-0.92459/

2021/08/04

Notes on installing nfsen on Ubuntu 20.04 (LXC).

Once the OS is deployed, update first:

apt update

apt upgrade -y

Then set the timezone:

timedatectl set-timezone Asia/Taipei


Install the required packages:

apt install -y nfdump rrdtool librrd-dev librrds-perl librrdp-perl libpcap-dev php php-common libsocket6-perl apache2 libapache2-mod-php libmailtools-perl libio-socket-ssl-perl

Download nfsen. The version from the official site hits the following bug during installation:

Can't use string ("live") as a HASH ref while "strict refs" in use at libexec/NfProfile.pm line 1238.


So download the fixed version from https://github.com/p-alik/nfsen/releases/tag/nfsen-1.3.8 instead. Unpack it, then:

cd nfsen-nfsen-1.3.8/etc

cp nfsen-dist.conf nfsen.conf

The install path will be /opt/nfsen. Edit nfsen.conf:

$BASEDIR = "/opt/nfsen";

$PREFIX = '/usr/bin/';

$USER = "www-data";

$WWWUSER = "www-data";

$WWWGROUP = "www-data";

%sources = ( 'upstream1' => {'port'=>'9995','col'=>'#0000ff','type'=>'netflow'} );


mkdir /opt/nfsen

mkdir /var/www/html/nfsen


adduser netflow


Run the installer:

./install.pl ./etc/nfsen.conf

which fails with:

RRD version '1.7002' not yet supported!


Edit libexec/NfSenRRD.pm, around line 76, and change the RRD version check to: $rrd_version >= 1.2 && $rrd_version < 1.9
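
If the line number has drifted in your copy, just search for the version check first (a small helper, nothing more):

grep -n 'rrd_version' libexec/NfSenRRD.pm
# then edit the check so it reads: $rrd_version >= 1.2 && $rrd_version < 1.9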


Run the installer again:

./install.pl ./etc/nfsen.conf


Start it:

/opt/nfsen/bin/nfsen start


Set it to start at boot:


cd /etc/systemd/system
ln -s /lib/systemd/system/rc-local.service


vi /etc/rc.local

#!/bin/sh -e

/opt/nfsen/bin/nfsen start

exit 0

chmod +x /etc/rc.local 


Disable nfdump at boot, since nfsen starts it itself:

systemctl disable nfdump.service


Reboot.


Point the NetFlow exporter at UDP port 9995.


Open http://nfsen_ip/nfsen/nfsen.php to confirm everything works.


https://sc8log.blogspot.com/2017/06/ubuntu-1604-netflow.html

https://github.com/p-alik/nfsen/releases/tag/nfsen-1.3.8


https://github.com/p-alik/nfsen/issues/1

https://zoomadmin.com/HowToInstall/UbuntuPackage/libsocket6-perl

2021/08/01

After upgrading Proxmox to 7, CentOS 7 LXC containers fail to start with the following message:

WARN: old systemd (< v232) detected, container won't run in a pure cgroupv2 environment! Please see documentation -> container -> cgroup version. 

The links below describe workarounds, but upgrading to CentOS 8 is still the better option.


https://forum.proxmox.com/threads/unified-cgroup-v2-layout-upgrade-warning-pve-6-4-to-7-0.92459/

https://pve.proxmox.com/wiki/Upgrade_from_6.x_to_7.0#Old_Container_and_CGroupv2

https://pve.proxmox.com/pve-docs/chapter-pct.html#pct_cgroup_compat

There has been a long-standing problem with LXC containers on Proxmox that I finally solved today, so here are the notes.

Right after deploying an LXC, updating fails like this:


#dnf -y update

Extra Packages for Enterprise Linux 8 - Next - x86_64          0.0  B/s |   0  B     00:00    

Errors during downloading metadata for repository 'epel-next':

  - Curl error (6): Couldn't resolve host name for https://mirrors.fedoraproject.org/metalink?repo=epel-next-8&arch=x86_64&infra=stock&content=centos [Could not resolve host: mirrors.fedoraproject.org]

Error: Failed to download metadata for repo 'epel-next': Cannot prepare internal mirrorlist: Curl error (6): Couldn't resolve host name for https://mirrors.fedoraproject.org/metalink?repo=epel-next-8&arch=x86_64&infra=stock&content=centos [Could not resolve host: mirrors.fedoraproject.org]


Check /etc/resolv.conf:


cat /etc/resolv.conf 

# Generated by NetworkManager

search abc.com


There is no nameserver configured, even though that value was definitely entered when the LXC was deployed.

It turned out to be a NetworkManager problem that gets in the way when Proxmox tries to write /etc/resolv.conf.

At first I wanted to handle it through NetworkManager directly with nmtui, but nmtui is not installed by default, so I had to install it myself:

# dnf install -y NetworkManager-tui

After setting the nameserver in nmtui and rebooting, /etc/resolv.conf still shows no nameserver. Adding the nameserver to /etc/resolv.conf by hand does not help either; it disappears again after a reboot.


So I decided to stop NetworkManager:

# systemctl stop NetworkManager.service

# systemctl disable NetworkManager.service


and install and use network-scripts instead:

# dnf install -y network-scripts

# systemctl enable network


Reboot after the change and look at /etc/resolv.conf again:

# cat /etc/resolv.conf 

# --- BEGIN PVE ---

search abc.com

nameserver 8.8.8.8

# --- END PVE ---

Now it works.


https://forum.proxmox.com/threads/proxmox-6-0-9-dns-host-settings-reset-every-time.59434/

https://www.thegeekdiary.com/how-to-disable-networkmanager-in-centos-rhel-8/


2021/07/22

There are m3u8 channel lists to be found online, so today I tried playing m3u8 files directly on Linux. My first choice was VLC, but when an entry in the playlist is unreachable VLC keeps popping up error dialogs. I then found smplayer: just Open > File and load the m3u8 list, and you can open the playlist and pick whichever source you want. Very handy. (A quick way to check which entries still respond is sketched below.)
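
Related to the second link below, a quick way to weed out dead entries, assuming the URLs from the m3u8 list have been extracted into urls.txt (one per line):

while read -r url; do
  code=$(curl -o /dev/null -s -w '%{http_code}' --max-time 5 "$url")
  echo "$code $url"
done < urls.txt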


https://mobileai.net/2021/07/21/twm3u80721

https://stackoverflow.com/questions/38906626/curl-to-return-http-status-code-along-with-the-response

https://www.sohu.com/a/397676588_495675

2021/07/16

Today, while hooking a Raspberry Pi 4 up to a TV, I ran into the problem of no sound over HDMI. After reading a lot of documentation and trying three OS images, the MATE-based image (linux mate) was the one that worked, because its audio settings let you choose the HDMI output directly.

The catch is that after every reboot the choice has to be made by hand again, which is a nuisance. To fix that, after setting it manually once, run:


pactl list short sinks


to find out which sink is currently in use. Once you have it, add a command to the startup applications that sets the audio output at login:


pactl set-default-sink alsa_output.platform-bcm2835_audio.stereo-fallback


That way the audio is automatically pointed back at the HDMI output after every reboot.
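
I added it through the desktop's Startup Applications tool; doing the same thing by hand looks roughly like this (a sketch, assuming the session honours XDG autostart files and reusing the sink name found above):

mkdir -p ~/.config/autostart
cat > ~/.config/autostart/hdmi-audio.desktop <<'EOF'
[Desktop Entry]
Type=Application
Name=Set default audio sink
Exec=pactl set-default-sink alsa_output.platform-bcm2835_audio.stereo-fallback
EOF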


https://www.youtube.com/watch?v=XFNeLzfGB-o

https://www.upressme.xyz/how-to-fix-sound-in-ubuntu/

2021/07/14

Today I tried pulling data from TWSE through its API:

curl https://mis.twse.com.tw/stock/api/getStockInfo.jsp?ex_ch=tse_0056.tw|jq


{

  "msgArray": [

    {

      "tv": "-",

      "ps": "-",

      "nu": "http://www.yuantaetfs.com/#/RtNav/Index",

      "pz": "-",

      "bp": "0",

      "a": "34.3800_34.3900_34.4000_34.4100_34.4200_",

      "b": "34.3700_34.3600_34.3500_34.3400_34.3300_",

      "c": "0056",

      "d": "20210714",

      "ch": "0056.tw",

      "tlong": "1626229079000",

      "f": "446_1402_1414_406_48_",

      "ip": "0",

      "g": "70_209_677_95_78_",

      "mt": "848857",

      "h": "34.9000",

      "it": "02",

      "l": "34.3500",

      "n": "元大高股息",

      "o": "34.8900",

      "p": "0",

      "ex": "tse",

      "s": "-",

      "t": "10:17:59",

      "u": "38.3300",

      "v": "14561",

      "w": "31.3700",

      "nf": "元大臺灣高股息證券投資信託基金",

      "y": "34.8500",

      "z": "-",

      "ts": "0"

    }

  ],

  "referer": "",

  "userDelay": 5000,

  "rtcode": "0000",

  "queryTime": {

    "sysDate": "20210714",

    "stockInfoItem": 901,

    "stockInfo": 188897,

    "sessionStr": "UserSession",

    "sysTime": "10:18:03",

    "showChart": false,

    "sessionFromTime": -1,

    "sessionLatestTime": -1

  },

  "rtmessage": "OK",

  "exKey": "if_tse_0056.tw_zh-tw.null",

  "cachedAlive": 82099

}

To get just the data inside msgArray, the command is:

curl https://mis.twse.com.tw/stock/api/getStockInfo.jsp?ex_ch=tse_00882.tw|jq '.msgArray'


[

  {

    "tv": "-",

    "ps": "-",

    "nu": "https://www.ctbcinvestments.com/Product/ETFBusiness",

    "pz": "-",

    "bp": "0",

    "a": "15.4100_15.4200_15.4300_15.4400_15.4500_",

    "b": "15.4000_15.3900_15.3800_15.3700_15.3600_",

    "c": "00882",

    "d": "20210714",

    "ch": "00882.tw",

    "tlong": "1626229529000",

    "f": "1005_1039_946_454_514_",

    "ip": "0",

    "g": "3086_873_1726_575_596_",

    "mt": "353051",

    "h": "15.4300",

    "it": "02",

    "l": "15.4000",

    "n": "中信中國高股息",

    "o": "15.4300",

    "p": "0",

    "ex": "tse",

    "s": "-",

    "t": "10:25:29",

    "u": "9999.9500",

    "v": "17761",

    "nf": "中國信託全球收益ETF傘型證券投資信託基金之中國信託恒生中國高股息ETF證券投資信託基金",

    "y": "15.4500",

    "z": "-",

    "ts": "0"

  }

]


Next, the current quote is in the "a" field. This took a little trial and error: because the previous output is wrapped in square brackets [ ], the command has to become

curl https://mis.twse.com.tw/stock/api/getStockInfo.jsp?ex_ch=tse_0056.tw|jq '.msgArray[].a'


"34.3700_34.3800_34.3900_34.4000_34.4100_"

The first underscore-separated value is the current quote.
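
So the current quote can be pulled out in one go, e.g.:

curl -s 'https://mis.twse.com.tw/stock/api/getStockInfo.jsp?ex_ch=tse_0056.tw' | jq -r '.msgArray[].a' | cut -d'_' -f1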


https://zys-notes.blogspot.com/2020/01/api.html

2021/06/21

I have always used nc to push the data collected by CrystalDiskInfo to syslog, but nc has one long-standing problem: most antivirus products flag it as malicious. With a CrystalDiskInfo update due today, I went looking for an alternative and found Swiss File Knife (sfk), which is impressively capable; most importantly it does not trigger antivirus false positives. The syntax for sending a file to syslog:

type diskinfo.txt | sfk.exe tonetlog 10.0.0.1:514

I'll look at the other features when I need them.

One problem turned up, though: sfk tonetlog sends at most 1500 bytes, so diskinfo.txt gets truncated. I've switched back to nc for now.


http://stahlworks.com/dev/swiss-file-knife.html

https://sourceforge.net/projects/swissfileknife/

http://stahlworks.com/dev/index.php?tool=netlog

http://netcat.sourceforge.net/

2021/06/13

Took a look at HFish today after not following it for a while; it is up to version 2.3.0, and both the UI and the architecture have changed quite a bit. Trying it, though, revealed a very serious problem: the client only works the first time it is deployed. If ./client is run again after a restart, the client side seems fine, but it breaks node management on the server so badly that the whole thing becomes unusable. Looks like I'll have to wait for the next release to see if it gets fixed.


 https://hfish.io/index.html

2021/06/06

Notes on installing Jitsi on Ubuntu 20.04.

Once the OS is installed:

apt update

apt upgrade -y

Set up the server's name in DNS, then:

apt install curl gnupg

curl https://download.jitsi.org/jitsi-key.gpg.key | sudo sh -c 'gpg --dearmor > /usr/share/keyrings/jitsi-keyring.gpg'

echo 'deb [signed-by=/usr/share/keyrings/jitsi-keyring.gpg] https://download.jitsi.org stable/' | sudo tee /etc/apt/sources.list.d/jitsi-stable.list > /dev/null

sudo apt-get -y update

sudo apt-get -y install jitsi-meet

To use Let's Encrypt:

apt install certbot

/usr/share/jitsi-meet/scripts/install-letsencrypt-cert.sh

Restart nginx, browse to https://servername, and it is ready to use.

Remember to renew the Let's Encrypt certificate every three months, or do it with crontab:
1 1 * * 6 /usr/bin/certbot renew

https://kafeiou.pw/2020/06/19/2489/
https://campus-xoops.tn.edu.tw/modules/tad_book3/page.php?tbdsn=1557

Once Jitsi is up it is completely open by default. Controlling access by IP is not practical, so I wanted LDAP. After a lot of reading, it turns out nginx can delegate authentication to another server with auth_request, which avoids a pile of extra installation and configuration.


server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name meet;

    location / {
        auth_request /auth;
        try_files $uri $uri/ =404;
    }

    location = /auth {
        proxy_pass http://10.0.0.1/auth/;
        proxy_pass_request_body off;
        proxy_set_header Content-Length "";
        proxy_set_header X-Original-URI $request_uri;
    }
}


The two location blocks are what gets added to the existing nginx config; the proxy_pass target should be adjusted to point at a server and path where LDAP authentication is already set up. Restart nginx after the change.

https://techexpert.tips/zh-hans/nginx-zh-hans/nginx-%E6%B4%BB%E5%8A%A8%E7%9B%AE%E5%BD%95%E4%B8%8A%E7%9A%84-ldap-%E8%BA%AB%E4%BB%BD%E9%AA%8C%E8%AF%81/

2021/05/31

Opening LibreNMS today I could not get in; it showed an error message (screenshot omitted).

The logs show the problem started around 03:00 on 5/30. After a restart it came back up, but running validate.php reported the following:


./validate.php

====================================

Component | Version

--------- | -------

LibreNMS  | 21.5.1-16-g15da7fa

DB Schema | 2020_12_14_091314_create_port_group_port_table (205)

PHP       | 7.4.19

Python    | 3.6.8

MySQL     | 5.5.68-MariaDB

RRDTool   | 1.4.8

SNMP      | NET-SNMP 5.7.2

====================================


[OK]    Composer Version: 2.0.14

[OK]    Dependencies up-to-date.

[OK]    Database connection successful

[FAIL]  MariaDB version 10.2.2 is the minimum supported version as of March, 2021. Update MariaDB to a supported version 10.5 suggested).

[FAIL]  Your database is out of date!

        [FIX]:

        ./lnms migrate

[WARN]  Global lnms shortcut not installed. lnms command must be run with full path

        [FIX]:

        sudo ln -s /opt/librenms/lnms /usr/bin/lnms

[WARN]  Bash completion not installed. lnms command tab completion unavailable.

        [FIX]:

        sudo cp /opt/librenms/misc/lnms-completion.bash /etc/bash_completion.d/

[WARN]  Log rotation not enabled, could cause disk space issues

        [FIX]:

        sudo cp /opt/librenms/misc/librenms.logrotate /etc/logrotate.d/librenms

[WARN]  Your install is over 24 hours out of date, last update: Sat, 29 May 2021 14:08:28 +0000

        [FIX]:

        Make sure your daily.sh cron is running and run ./daily.sh by hand to see if there are any errors.

[FAIL]  We have found some files that are owned by a different user than 'librenms', this will stop you updating automatically and / or rrd files being updated causing graphs to fail.

        [FIX]:

        sudo chown -R librenms:librenms /opt/librenms

        sudo setfacl -d -m g::rwx /opt/librenms/rrd /opt/librenms/logs /opt/librenms/bootstrap/cache/ /opt/librenms/storage/

        sudo chmod -R ug=rwX /opt/librenms/rrd /opt/librenms/logs /opt/librenms/bootstrap/cache/ /opt/librenms/storage/

        Files:

         /opt/librenms/config.php_20201107

         /opt/librenms/html/plugins/Weathermap/nkhc.png

         /opt/librenms/html/plugins/Weathermap/nkhc.html



Apart from the MariaDB upgrade, I worked through the other fixes by hand without trouble. Since ./lnms migrate presumably involves the database, I decided to upgrade the DB first.

Back up the database first:

    $ mysqldump -u root -p --all-database > mysql-backup.sql

Back up the config:

# cp /etc/my.cnf /etc/my.cnf.bak

Add the MariaDB repo:

# vi /etc/yum.repos.d/MariaDB.repo

with the following content:
[mariadb]
name = MariaDB
baseurl = http://yum.mariadb.org/10.5/centos7-amd64
gpgkey=https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
gpgcheck=1

Stop and remove the current MariaDB:


    # systemctl stop mariadb
    # yum remove mariadb mariadb-server

Now install MariaDB 10.5:

    # yum install mariadb mariadb-server

Start MariaDB and enable it at boot:

    # systemctl enable  mariadb
    # systemctl start  mariadb


Upgrade the existing MariaDB data:

    # mysql_upgrade -u root -p

The upgrade itself went without any problems. Next, running

        ./lnms migrate

produced the following error:

Migrating: 2020_12_14_091314_create_port_groups_table

In Connection.php line 678:

SQLSTATE[42S01]: Base table or view already exists: 1050 Table ‘port_groups’ already exists (SQL: create table port_groups (id int unsigned not null auto_increment primary key, name varchar(255) not null, desc varchar(255) null) default character set utf8mb4 collate ‘utf8mb4_unicode_ci’)

In Exception.php line 18:

SQLSTATE[42S01]: Base table or view already exists: 1050 Table ‘port_groups’ already exists

In PDOStatement.php line 112:

SQLSTATE[42S01]: Base table or view already exists: 1050 Table ‘port_groups’ already exists


Go into the DB and drop the port_groups table first, then run

        ./lnms migrate

again and it completes normally. Finally, one more run of

./validate.php

reports no problems at all. Will keep watching.


https://www.opencli.com/mysql/rhel-centos7-upgrade-mariadb-to-10-5 

https://community.librenms.org/t/validation-gives-failure-to-update-mariadb-will-eventually-cause-issues-lnms-migrate-gives-error/15391

2021/05/14

Live streams used to go straight up to YouTube. Today the requirement is to avoid YouTube, and users have to be able to watch directly in a browser without VLC. The previous streaming setup used RTMP, and from what I can find, playing RTMP in a browser means the player has to fall back to Flash, which no browser supports any more. So the alternative is to convert the stream to HLS (m3u8) first.

In srs.conf make the following change:

vhost __defaultVhost__ {

   hls {
        enabled         on;
        hls_path        /usr/local/srs/objs/nginx/html/;   # adjust this path to your setup
        hls_fragment    10;
        hls_window      60;
    }

}

After restarting and starting a stream, a live directory and the related m3u8 files are generated under the html directory.

On the client side, at the moment only Safari can play this directly with an HTML5 <video> tag; every other browser needs a JavaScript player. See the example below:


<html>

<head>

    <link href="https://vjs.zencdn.net/7.4.1/video-js.css" rel="stylesheet">

</head>

 

<body>

    <video id='my-video' class='video-js' controls preload='auto' width='800' height='600' poster='avatar-poster.jpg'

        data-setup='{ "html5" : { "nativeTextTracks" : true } }'>

        <source src='http://1.2.3.4:8080/live/livestream.m3u8' type="application/x-mpegURL">

        <p class='vjs-no-js'>

            To view this video please enable JavaScript, and consider upgrading to a web browser that

            <a href='https://videojs.com/html5-video-support/' target='_blank'>supports HTML5 video</a>

        </p>

    </video>

 

    <script src='https://vjs.zencdn.net/7.4.1/video.js'></script>

    <script src="https://cdnjs.cloudflare.com/ajax/libs/videojs-contrib-hls/5.15.0/videojs-contrib-hls.min.js"></script>

 

    <script>

        var player = videojs('my-video');

        player.play();

    </script>

 

</body>

</html>


If you want to play a file rather than a live stream, first convert the mp4 to HLS:

ffmpeg -i video.mp4 -codec: copy -start_number 0 -hls_time 15 -hls_list_size 0 -f hls vido.m3u8

then adapt the HTML example above accordingly.


https://www.itbkz.com/9372.html


https://caniuse.com/http-live-streaming


https://blog.csdn.net/weixin_40592935/article/details/109361642

2021/04/24

The original plan was to run the Loki Docker image directly on the Synology, but the problem is that the data gets lost whenever the container is upgraded; that seems to be an issue with this kind of Docker setup in general, unless you can upgrade in place inside the container without pulling a new image. For now I just installed an Oracle Linux 8 machine and run the binary directly.

Remember that some parameters in loki-local-config.yaml need changing, because data is stored under /tmp by default. I simply did mkdir /loki and changed the config to the following:


auth_enabled: false


server:

  http_listen_port: 3100

  grpc_listen_port: 9096


ingester:

  wal:

    enabled: true

    dir: /loki/wal

  lifecycler:

    address: 127.0.0.1

    ring:

      kvstore:

        store: inmemory

      replication_factor: 1

    final_sleep: 0s

  chunk_idle_period: 1h       # Any chunk not receiving new logs in this time will be flushed

  max_chunk_age: 1h           # All chunks will be flushed when they hit this age, default is 1h

  chunk_target_size: 1048576  # Loki will attempt to build chunks up to 1.5MB, flushing first if chunk_idle_period or max_chunk_age is reached first

  chunk_retain_period: 30s    # Must be greater than index read cache TTL if using an index cache (Default index read cache TTL is 5m)

  max_transfer_retries: 0     # Chunk transfers disabled


schema_config:

  configs:

    - from: 2020-10-24

      store: boltdb-shipper

      object_store: filesystem

      schema: v11

      index:

        prefix: index_

        period: 24h


storage_config:

  boltdb_shipper:

    active_index_directory: /loki/boltdb-shipper-active

    cache_location: /loki/boltdb-shipper-cache

    cache_ttl: 24h         # Can be increased for faster performance over longer query periods, uses more disk space

    shared_store: filesystem

  filesystem:

    directory: /loki/chunks


compactor:

  working_directory: /loki/boltdb-shipper-compactor

  shared_store: filesystem


limits_config:

  reject_old_samples: true

  reject_old_samples_max_age: 168h


chunk_store_config:

  max_look_back_period: 0s


table_manager:

  retention_deletes_enabled: false

  retention_period: 0s


ruler:

  storage:

    type: local

    local:

      directory: /loki/rules

  rule_path: /loki/rules-temp

  alertmanager_url: http://localhost:9093

  ring:

    kvstore:

      store: inmemory

  enable_api: true


2021/04/23

I've been trying out Grafana Loki lately. The project has been out for quite a while; I just never got around to it. After playing with it for a few days, it is rather good: if your requirements are not too complex, it is a solid choice.

A quick rundown: there are four related programs visible on GitHub.

logcli

A command-line search tool. If you'd rather not use it, you can also query with curl directly:


curl -G -s "http://10.0.0.1:3100/loki/api/v1/query_range" --data-urlencode 'query={job="abc"}' --data-urlencode 'step=3000'|jq


loki-canary

A performance-checking tool.

loki

The main server binary.

promtail

The client that ships data back to Loki.

The simplest approach is to just run the loki and promtail binaries directly; nothing needs to be installed (launch commands are sketched after the config below). Before running them, get the two YAML files right: if loki uses the default YAML, remember to change the data paths, which default to /tmp; for promtail, change the default log file path, and if there are several files to collect, add an extra targets block like the second one here:


scrape_configs:
- job_name: system
  static_configs:
  - targets:
      - localhost
    labels:
      job: varlogs
      __path__: /var/log/*
  - targets:
      - localhost
    labels:
      job: nginxlogs
      __path__: /var/log/nginx/*
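
Launching the two binaries then looks roughly like this (a sketch; the file names below are the ones from the GitHub release downloads and may differ for your build):

./loki-linux-amd64 -config.file=loki-local-config.yaml &
./promtail-linux-amd64 -config.file=promtail-local-config.yaml &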


To search from a web UI, set Loki up as a data source in Grafana.


https://grafana.com/docs/loki/latest/overview/

https://www.cnblogs.com/fsckzy/p/13231696.html

https://my.oschina.net/xiaominmin/blog/3300964

2021/03/19

In the past, when I needed an archive that runs a batch file after extraction, I would compress to 7z and post-process it with 7zsfx. Today I found that Bandizip can build a self-extracting archive that does this directly. One problem: when the self-extractor runs it pops up a prompt window, and by default that cannot be turned off. A forum search showed that passing /auto on the command line solves it, for example

abc.exe /auto


Postscript 2024/3/3

This can also be handled with a batch file. run.bat contains:

curl -o %tmp%\abc.exe http://10.0.0.1/abc.exe

start %tmp%\abc.exe /auto



https://www.azofreeware.com/2012/07/7-zip-sfx-maker-32-7z.html

Bandizip

https://groups.google.com/g/bandizip-win/c/tS9KLKh45O8/m/spdej7MYAwAJ

2021/02/24

Today, after updating the Proxmox kernel and rebooting, every Windows guest came up with its IP set to obtain automatically. When re-entering the static IP, the gateway is auto-filled, but after saving the first time the gateway disappears again and has to be set once more. No idea which bug this is.


2021/02/22

Google recently announced that the unlimited Drive storage for academic institutions will be withdrawn starting July 2022, which has everyone scrambling over how to move their data. Fortunately there is still 1TB of OneDrive to use, so I'm handling it with rclone, which I've been using all along. It is a portable, text-only tool; if a text interface scares you, you can try RaiDrive instead, though the free version has some limitations.

First go to https://rclone.org/, download and unpack it. On first use run rclone config to create a new remote. Note that even if you created one before, the credentials expire after a while; delete it and create it again. For the setup steps see the link below.

https://zhuanlan.zhihu.com/p/139200172


A few frequently used command examples follow (gd = the Google Drive remote, od = the OneDrive remote).


Upload local data to the cloud, logging the run:

./rclone -v copy ~/abc gd:/abc --log-file /tmp/0928.log


Download cloud data to the local machine, logging the run:

./rclone -v copy gd:/abc /tmp/abc --log-file /tmp/0928.log


Copy data from gd to another cloud, od (the data is downloaded to the local machine first and then uploaded):

./rclone -v copy gd:/abc od:/abc --log-file /tmp/0928.log


To mount a remote as a drive letter on Windows, install WinFsp first:

https://github.com/billziss-gh/winfsp/releases

Then open a DOS window (never in administrator mode), create a cache directory:

mkdir c:\tmp

and run the mount command:

rclone mount gd:/  z: --cache-dir c:\tmp

The DOS window has to stay open after running it; closing the window unmounts the drive.