Daily Study Notes

java shell docker

September 15, 2020

A summary of A Little Book on Java

Basic

  1. Compiling and running
    Compile: javac First.java produces a First.class file
    Run: java First runs the compiled First.class
    • The Java compiler turns each class in the source code into a corresponding class file that stores its bytecode
    • Only a class with a main method can be run; a project may contain several classes with main methods, which splits the project into separately runnable units and makes testing easier
  2. Basic types
    • Numeric: int, float, double
    • Character: char a = 'a';
    • Boolean: boolean, with literals true and false
    • Strings: String title = "A Little Book on Java";
    • Array: datatype[] ArrayName = new datatype[ArraySize]; using an index past the array boundary raises an ArrayIndexOutOfBoundsException
  3. Control flow statements
    • while loop
  while <boolean-expression>
    statement
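The basics above (primitive types, arrays, and a while loop) can be combined into one small runnable class. This is a minimal sketch; WhileDemo and sum are invented names for illustration, not from the book.

```java
// WhileDemo.java -- sums the elements of an int array with a while loop.
public class WhileDemo {
    // Returns the sum of all elements in values.
    public static int sum(int[] values) {
        int total = 0;
        int i = 0;
        while (i < values.length) {  // guard stops before the array boundary
            total += values[i];
            i++;
        }
        return total;
    }

    public static void main(String[] args) {
        int[] nums = new int[]{1, 2, 3, 4, 5};
        System.out.println(WhileDemo.sum(nums)); // prints 15
    }
}
```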
  1. Abstraction mechanisms
  2. Procedures: a procedure is characterized by
    • its name,
    • what kinds of parameters it expects (if any),
    • what kind of result it might return.
  3. class
    • Syntax of Class Declarations
  class Hello{
  }
  class Foo {
    public static void main(String[] args){
        /* Body of main */
    }
  }
  class Foo {
    static int name;
    public static void showFoo(){
    }
  }

  classname.methodname(parameters); // static method usage
  classname.variablename;           // static variable usage
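A minimal sketch of the classname.methodname / classname.variablename access shown above; the Counter class is a made-up example, not from the book.

```java
// Counter.java -- accessing static members through the class name.
public class Counter {
    static int count = 0;  // static variable: one copy shared by the whole class

    public static void increment() {
        count++;
    }

    public static void main(String[] args) {
        Counter.increment();               // classname.methodname(parameters)
        Counter.increment();
        System.out.println(Counter.count); // classname.variablename
    }
}
```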
  1. The Object Concept
  class PointIn3D{
    //Instance Variables
    private double x;
    private double y;
    private double z;
  }
  class PointIn3D{
    private double x;
    private double y;
    private double z;

     //Constructors
    //This constructor does not take parameters
    public PointIn3D(){
      /* Initializing the fields of this object to the origin,
         a default point */
      x = 0;
      y = 0;
      z = 0;
    }

    //This constructor takes parameters
    public PointIn3D(double X, double Y, double Z){
      /* Initializing fields of this object to values specified by
         the parameters */
      x = X;
      y = Y;
      z = Z;
    }
  }
  //Creates a PointIn3D object with coordinates (0, 0, 0)
  new PointIn3D();
  //Creates a PointIn3D object with coordinates (10.2, 78, 1)
  new PointIn3D(10.2, 78, 1);
   ReferenceType ReferenceName;

   PointIn3D p = new PointIn3D(1, 1, 1);
   ReferenceName.FieldName;

  public PointIn3D(){
    this.x = 0;
    this.y = 0;
    this.z = 0;
  }
  public double getX(){
    return this.x;
  }

  1. Rules for Method Lookup and Type Checking.
    • First the rules. Remember that there are two phases: compile time, which is when type checking is done and run time, which is when method lookup happens. Compile time is before run time.
    • The type checker has to say that a method call is OK at compile time.
    • All type checking is done based on what the declared type of a reference to an object is.
    • Subtyping is an integral part of type checking. This means if B is a subtype of A and there is a context that gets a B where A was expected there will not be a type error.
    • Method lookup is based on actual type of the object and not the declared type of the reference.
    • When there is overloading (as opposed to overriding) this is resolved by type-checking.
  class myInt {
      private int n;
      public myInt(int n){
          this.n = n;
      }
      public int getval(){
          return n;
      }
      public void increment(int n){
          this.n += n;
      }
      public myInt add(myInt N){
          return new myInt(this.n + N.getval());
      }
      public void show(){
          System.out.println(n);
      }
  }

  class gaussInt extends myInt {
      private int m;  //represents the imaginary part
      public gaussInt(int x, int y){
          super(x);
          this.m = y;
      }
      public void show(){
          System.out.println("realpart is: " + this.getval() +" imagpart is: " + m);
      }
      public int realpart() {
          return getval();
      }
      public int imagpart() {
          return m;
      }
      public gaussInt add(gaussInt z){
          return new gaussInt(z.realpart() + realpart(),
                              z.imagpart() + imagpart());
      }
      public static void main(String[] args){
          gaussInt kreimhilde = new gaussInt(3,4);
          kreimhilde.show();
          kreimhilde.increment(2);
          kreimhilde.show();
          System.out.println("Now we watch the subtleties of overloading.");
          myInt a = new myInt(3);
          gaussInt z = new gaussInt(3,4);
          gaussInt w;
          myInt b = z;
          myInt d = b.add(b); //this does type check
          System.out.print("the value of d is: ");
          d.show();

          // add is overloaded here, not overridden, since the signatures differ.
          // The calls below type-check by the declared types and both resolve to
          // add(myInt), which returns a myInt that cannot be assigned to the gaussInt w.
          // w = z.add(b);
          // w = b.add(z);
          w = ((gaussInt) b).add(z); //this does type check
          System.out.print("the value of w is: ");
          w.show();
          myInt c = z.add(a); //will this typecheck?
          System.out.print("the value of c is: ");
          c.show();
      }
  }

  1. The Exception Object
    • Exceptions fall into two groups: unchecked exceptions and checked exceptions.
    • All exceptions occur at run time; "checked" versus "unchecked" refers only to whether the compiler checks for them.
    • The difference: unchecked exceptions happen because of the programmer's carelessness, so they are preventable and avoidable. Two common unchecked exceptions are ArrayIndexOutOfBoundsException and NullPointerException.
    • All other exceptions are checked exceptions; two common ones are FileNotFoundException and IOException.
  2. Creating a new exception
    • A newly created exception should extend Exception or any of its subclasses other than RuntimeException, because a new exception should be a checked exception.
    • An exception is thrown to indicate the occurrence of a runtime error. Only checked exceptions should be thrown; unchecked exceptions should be eliminated from the code rather than declared. (If a method's header does not contain a throws clause, then the method throws no checked exceptions.)
  3. Throwing an Exception
      public static void main(String[] args) throws IOException,
                                              FileNotFoundException
    
    • A method’s header advertises the checked exceptions that may occur when the method executes
    • An exception can occur in two ways: explicitly, through a throw statement, or implicitly, by calling a method that can throw an exception
  4. Catching an exception: as in other languages, the stack is unwound frame by frame until a suitable catch is found; if none is found, the default exception handler catches it (an open question in these notes: at which level does the default handler sit? The main level?)
      try{
         code that could cause exceptions
      }
      catch (Exception e1){
         code that does something about exception e1
      }
      catch (Exception e2){
         code that does something about exception e2
      }
    
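A minimal sketch tying the exception points together: a new checked exception extending Exception (not RuntimeException), a method whose header advertises it with throws, and a caller that catches it. All class and method names here are invented for illustration.

```java
// BalanceDemo.java -- define and use a custom checked exception.
class InsufficientFundsException extends Exception {
    public InsufficientFundsException(String message) {
        super(message);
    }
}

public class BalanceDemo {
    // The throws clause advertises the checked exception to callers.
    public static int withdraw(int balance, int amount) throws InsufficientFundsException {
        if (amount > balance) {
            // Explicit throw statement (the first way an exception occurs).
            throw new InsufficientFundsException("need " + amount + ", have " + balance);
        }
        return balance - amount;
    }

    public static void main(String[] args) {
        try {
            System.out.println(withdraw(100, 30)); // succeeds
            withdraw(10, 30);                      // throws
        } catch (InsufficientFundsException e) {
            // The stack unwinds to this catch clause.
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```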

A Little Book on Shell

Common commands

  1. Files: cp mv mkdir rm ln
    The ln command, ln file link, creates a hard link by default; ln -s file link creates a soft (symbolic) link. A hard link increases the file's link count; a soft link does not.
  2. Working with Commands (type which help man apropos info whatis alias)
| command | meaning                                           |
|---------|---------------------------------------------------|
| type    | Indicate how a command name is interpreted        |
| which   | Display which executable program will be executed |
| help    | Get help for shell builtins                       |
| man     | Display a command's manual page                   |
| apropos | Display a list of appropriate commands            |
| info    | Display a command's info entry                    |
| whatis  | Display one-line manual page descriptions         |
| alias   | Create an alias for a command                     |
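The hard-versus-soft link behavior of ln described above can be checked in a scratch directory. A minimal sketch, assuming GNU coreutils (for stat -c):

```shell
# Work in a throwaway directory.
cd "$(mktemp -d)"
echo hello > file

ln file hardlink     # hard link: same inode, so the link count rises to 2
ln -s file softlink  # symbolic link: its own inode, the count stays at 2

stat -c %h file      # prints 2: only the hard link is counted
readlink softlink    # prints the link target: file
```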
  1. Where commands come from:
    * An executable program, e.g. the executables under /usr/bin
    * A command built into the shell itself: a bash builtin
    * A shell function: shell functions are miniature shell scripts incorporated into the environment
    * An alias: aliases are commands that we can define ourselves, built from other commands

  2. man in detail: display a program's manual page. The manual is divided into sections; besides plain man command, you can use man 1 command to show the entry from section 1, User commands

| section | contents                                       |
|---------|------------------------------------------------|
| 1       | User commands                                  |
| 2       | Programming interfaces for kernel system calls |
| 3       | Programming interfaces to the C library        |
| 4       | Special files such as device nodes and drivers |
| 5       | File formats                                   |
| 6       | Games and amusements such as screen savers     |
| 7       | Miscellaneous                                  |
| 8       | System administration commands                                               |
  1. apropos: display appropriate commands. For example, apropos ls turns up lscpu, lshw, and a series of related commands
  2. whatis: display a one-line manual page description of a command
  3. info: another presentation of the manual content
  4. alias: alias name='string' builds a command named name; type name shows the string that name expands to

Redirection

  1. cat sort uniq grep wc head tail tee (tee reads from standard input and writes to standard output and files)
  2. A command's data streams are standard input, standard output, and standard error: stdin, stdout, stderr, with file descriptors 0, 1, 2
  3. Redirecting stdout: > redirects output to a file, overwriting its contents; >> appends to the end of the file instead
  4. Redirecting stderr: analogous to stdout, using 2> to redirect and 2>> to append
  5. Redirecting stdout and stderr into one file:
    • ls -l /bin/usr > ls-output.txt 2>&1 : note the 2>&1 notation, and that it must come after > ; the shell needs this order so that both redirections end up on the same open file
    • ls -l /bin/usr &> ls-output.txt does the same thing; &> covers both stdout and stderr, and ls -l /bin/usr &>> ls-output.txt appends both streams to the file
  6. Disposing of unwanted output: ls -l /bin/usr 2> /dev/null redirects the stream to /dev/null, effectively discarding it
  7. Redirecting stdin: < redirects stdin from the keyboard to a file, though this is rarely needed in practice
  8. Pipelines: the pipe operator sends one command's standard output into another command's standard input: command1 | command2
  9. Pipelines versus redirection: redirection can only target a file, while a pipeline targets another command
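A small sketch of the redirection and pipeline rules above; the paths are illustrative:

```shell
cd "$(mktemp -d)"

# Both streams into one file: redirect stdout first, then point stderr at it.
ls /bin /no/such/dir > out.txt 2>&1 || true  # ls still exits non-zero for the missing path

# >> appends instead of overwriting.
echo "extra line" >> out.txt

# Unwanted errors can be discarded via /dev/null.
ls /no/such/dir 2> /dev/null || true

# A pipeline: stdout of ls becomes stdin of wc.
ls /bin | wc -l
```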

Seeing the World as the Shell Sees It

  1. Expansion: how a simple character sequence, for example *, can have a lot of meaning to the shell. The process that makes this happen is called expansion: we enter something, and it is expanded into something else before the shell acts upon it. In other words, arguments are processed by the shell before the command ever receives them; that processing is expansion.
  2. echo is the key tool for making an expansion's result visible
  3. Pathname expansion, illustrated below:
  [me@linuxbox ~]$ ls
  Desktop ls-output.txt Pictures Templates Documents Music Public Videos

  [me@linuxbox ~]$ echo D*
  Desktop Documents

  [me@linuxbox ~]$ echo *s
  Documents Pictures Templates Videos

  [me@linuxbox ~]$ echo [[:upper:]]*
  Desktop Documents Music Pictures Public Templates Videos

  [me@linuxbox ~]$ echo /usr/*/share
  /usr/kerberos/share /usr/local/share

  1. Arithmetic expansion: $((expression)), where expression is an arithmetic expression; operands must be integers, and the operators are +, -, *, /, %, **
  [me@linuxbox ~]$ echo $(($((5**2)) * 3))
  75
  1. Brace Expansion:
  [me@linuxbox ~]$ echo Front-{A,B,C}-Back
  Front-A-Back Front-B-Back Front-C-Back

  [me@linuxbox ~]$ echo Number_{1..5}
  Number_1 Number_2 Number_3 Number_4 Number_5

  [me@linuxbox ~]$ echo {01..15}
  01 02 03 04 05 06 07 08 09 10 11 12 13 14 15

  [me@linuxbox ~]$ echo {001..15}
  001 002 003 004 005 006 007 008 009 010 011 012 013 014 015

  [me@linuxbox ~]$ echo {Z..A}
  Z Y X W V U T S R Q P O N M L K J I H G F E D C B A


  [me@linuxbox ~]$ mkdir Photos
  [me@linuxbox ~]$ cd Photos
  [me@linuxbox Photos]$ mkdir {2007..2009}-{01..12} 
  [me@linuxbox Photos]$ ls
  2007-01 2007-07 2008-01 2008-07 2009-01 2009-07
  2007-02 2007-08 2008-02 2008-08 2009-02 2009-08
  2007-03 2007-09 2008-03 2008-09 2009-03 2009-09
  2007-04 2007-10 2008-04 2008-10 2009-04 2009-10
  2007-05 2007-11 2008-05 2008-11 2009-05 2009-11
  2007-06 2007-12 2008-06 2008-12 2009-06 2009-12
  1. Parameter Expansion
  [me@linuxbox ~]$ echo $USER 
  me
  1. Command Substitution: allows running a command and expanding its output in place, written $(command)
  [me@linuxbox ~]$ echo $(ls)
  Desktop Documents ls-output.txt Music Pictures Public Templates Videos
  1. Quoting: controls whether expansion takes place.
    • Two examples:
  [me@linuxbox ~]$ echo this is a    test
  this is a test

  [me@linuxbox ~]$ echo The total is $100.00
  The total is 00.00

Note the problems in these two: 1. in the first, the shell collapsed the extra spaces inside 'a    test', because the shell splits arguments on whitespace and sees a and test as two separate arguments; 2. $100.00 expanded to 00.00 because the positional parameter $1 is unset and expands to nothing.

  [me@linuxbox ~]$ ls -l two words.txt
  ls: cannot access two: No such file or directory
  ls: cannot access words.txt: No such file or directory

  [me@linuxbox ~]$ ls -l "two words.txt"
  -rw-rw-r-- 1 me me 18 2016-02-20 13:03 two words.txt
  [me@linuxbox ~]$ mv "two words.txt" two_words.txt
  [me@linuxbox ~]$ echo this is a    test
  this is a test

  [me@linuxbox ~]$ echo "this is a   test"
  this is a   test




  vagrant@precise64:~$ echo $(cal)
  September 2020 Su Mo Tu We Th Fr Sa 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30

  vagrant@precise64:~$ echo "$(cal)"
     September 2020
  Su Mo Tu We Th Fr Sa
         1  2  3  4  5
   6  7  8  9 10 11 12
  13 14 15 16 17 18 19
  20 21 22 23 24 25 26
  27 28 29 30

| escape sequence | meaning         |
|-----------------|-----------------|
| \a              | Bell            |
| \b              | Backspace       |
| \n              | Newline         |
| \r              | Carriage return |
| \t              | Tab             |
  1. Signals: signals are one of several ways that the operating system communicates with programs
    • kill: the kill command doesn't exactly "kill" processes; rather, it sends them signals
      kill [-signal] PID...

| keyboard | signal |
|----------|--------|
| Ctrl-c   | INT    |
| Ctrl-z   | TSTP   |
| Number | Name  | Meaning |
|--------|-------|---------|
| 1      | HUP   | Hangup. A vestige of the days when terminals were attached to remote computers with phone lines and modems; it indicates to a program that the controlling terminal has "hung up." Closing a terminal session demonstrates it: the foreground program running on the terminal receives the signal and terminates. |
| 2      | INT   | Interrupt. Performs the same function as Ctrl-c sent from the terminal; it will usually terminate a program. |
| 3      | QUIT  | Quit. |
| 9      | KILL  | Kill. This signal is special: whereas programs may choose to handle signals in different ways, including ignoring them altogether, KILL is never actually sent to the target program. The kernel immediately terminates the process, giving it no opportunity to clean up or save its work. Use it only as a last resort when other termination signals fail. |
| 11     | SEGV  | Segmentation violation. Sent when a program makes illegal use of memory, that is, tries to write somewhere it is not allowed to write. |
| 15     | TERM  | Terminate. The default signal sent by the kill command; if a program is still "alive" enough to receive signals, it will terminate. |
| 18     | CONT  | Continue. Restores a process after a STOP or TSTP signal; sent by the bg and fg commands. |
| 19     | STOP  | Stop. Causes a process to pause without terminating. Like KILL, it is not sent to the target process, and thus it cannot be ignored. |
| 20     | TSTP  | Terminal stop. Sent by the terminal when Ctrl-z is pressed. Unlike STOP, the program receives TSTP and may choose to ignore it. |
| 28     | WINCH | Window change. Sent by the system when a window changes size; some programs, such as top and less, respond by redrawing themselves to fit the new dimensions. |
  1. Command lookup: where is ls defined, and how is it found?
    • the shell searches, in order, the directories listed in the PATH variable
  PATH=$PATH:$HOME/bin
  export PATH

  This simply appends $HOME/bin to PATH (note that $HOME is evaluated here)
  export PATH makes the changed PATH visible to processes the shell starts afterwards
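The PATH search can be observed directly. This sketch drops a throwaway script named hello (an invented name) into a temporary directory and prepends that directory to PATH:

```shell
# Where does the shell currently find a command?
command -v ls        # prints its path, e.g. /bin/ls or /usr/bin/ls

# Prepend a directory to PATH and the search finds our script first.
dir=$(mktemp -d)
printf '#!/bin/sh\necho custom hello\n' > "$dir/hello"
chmod +x "$dir/hello"
PATH="$dir:$PATH"
hello                # prints: custom hello
```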

Finding Files

  1. locate: simple and effective, but searches by filename only. locate is fast because it queries a database maintained by the updatedb command, which is typically run as a cron job (to be confirmed; I have not found the relevant configuration file)
  2. find: more complex and thorough. It searches from a given directory, filtered by various tests.
    Tests and their meanings:
| Test           | Meaning |
|----------------|---------|
| -type          | b: block special device file; c: character special device file; d: directory; f: regular file; l: symbolic link |
| -size          | c: bytes; w: 2-byte words; k: kilobytes; M: megabytes; G: gigabytes |
| -cmin n        | Match files or directories whose content or attributes were last modified exactly n minutes ago. To specify less than n minutes ago, use -n; for more than n minutes ago, use +n. |
| -cnewer file   | Match files or directories whose contents or attributes were last modified more recently than those of file. |
| -ctime n       | Match files or directories whose contents or attributes were last modified n*24 hours ago. |
| -empty         | Match empty files and directories. |
| -iname pattern | Like the -name test but case-insensitive. |
| -inum n        | Match files with inode number n. Helpful for finding all the hard links to a particular inode. |
| -mmin n        | Match files or directories whose contents were last modified n minutes ago. |
| -mtime n       | Match files or directories whose contents were last modified n*24 hours ago. |
| -name pattern  | Match files and directories with the specified wildcard pattern. |
| -newer file    | Match files and directories whose contents were modified more recently than the specified file. Useful in shell scripts that perform backups: each time you make a backup, update a file (such as a log), then use find to determine which files have changed since the last update. |
| -samefile name | Similar to the -inum test; match files that share the same inode number as file name. |
| -user name     | Match files or directories belonging to user name, expressed as a username or a numeric user ID. |
[me@linuxbox ~]$ find ~ -type f -name "*.JPG" -size +1M | wc -l

Note that the -name argument needs quotes to prevent pathname expansion, and that +1M in -size means files larger than 1M

( expression 1 ) -or ( expression 2 )

| Action  | Meaning                                         |
|---------|-------------------------------------------------|
| -delete | Delete the matching file                        |
| -ls     | Run ls -dils on the matching file               |
| -print  | Output the full pathname of the matching file   |
| -quit   | Quit once a match has been made                 |
find ~ -type f -name 'foo*' -exec ls -l '{}' ';'
-rwxr-xr-x 1 me me 224 2007-10-29 18:44 /home/me/bin/foo 
-rw-r--r-- 1 me me 0 2016-09-19 12:53 /home/me/foo.txt

# The same search with + instead of ';', which batches the matches into a single ls invocation
find ~ -type f -name 'foo*' -exec ls -l '{}' +
-rwxr-xr-x 1 me me 224 2007-10-29 18:44 /home/me/bin/foo 
-rw-r--r-- 1 me me 0 2016-09-19 12:53 /home/me/foo.txt

Archiving and Backup:

  1. Compression commands: gzip, bzip2. gzip options:

| Option  | Long Option  | Description |
|---------|--------------|-------------|
| -c      | --stdout     | Write output to standard output and keep the original files. |
| -d      | --decompress | Decompress; this makes gzip act like gunzip. |
| -f      | --force      | Force compression even if a compressed file already exists. |
| -l      | --list       | List compression statistics for an already-compressed file. |
| -r      | --recursive  | Recursively compress the files under a directory (each file is compressed into its own file, so an archiving program is still needed). |
| -v      | --verbose    | Display verbose messages while compressing. |
| -number |              | Set amount of compression: an integer from 1 (fastest, least compression) to 9 (slowest, most compression). 1 and 9 may also be written --fast and --best; the default is 6. |

    bzip2 is a compression program like gzip, with mostly the same options except -r and -number. bunzip2 and bzcat decompress; bzip2recover can recover damaged compressed files.

  2. Archiving commands: tar, zip. Archiving is the process of gathering up many files and bundling them together into a single large file.

| Mode | Meaning                                                    |
|------|------------------------------------------------------------|
| c    | Create an archive from a list of files and/or directories. |
| x    | Extract an archive.                                        |
| r    | Append specified pathnames to the end of an archive.       |
| t    | List the contents of an archive.                           |
  [me@linuxbox ~]$ gzip foo.txt
  [me@linuxbox ~]$ ls -l foo.*
  -rw-r--r-- 1 me me 3230 2018-10-14 07:15 foo.txt.gz

  [me@linuxbox ~]$ gzip -d foo.txt.gz

  [me@linuxbox ~]$ gunzip foo.txt
  [me@linuxbox ~]$ ls -l foo.*
  -rw-r--r-- 1 me me 15738 2018-10-14 07:15 foo.txt


  [me@linuxbox ~]$ bzip2 foo.txt
  [me@linuxbox ~]$ ls -l foo.txt.bz2
  -rw-r--r-- 1 me me 2792 2018-10-17 13:51 foo.txt.bz2
  [me@linuxbox ~]$ bunzip2 foo.txt.bz2

tar stores pathnames as relative paths (stripping a leading /); when unarchiving, files are recreated under the current directory with those relative paths. Example:

  [me@linuxbox ~]$ tar cf playground2.tar ~/playground

  [me@linuxbox ~]$ cd foo
  [me@linuxbox foo]$ tar xf ../playground2.tar
  [me@linuxbox foo]$ ls
  home playground

--wildcards can be used to restrict the operation to matching files. find is often combined with tar for bulk archiving

  find playground -name 'file-A' -exec tar rf playground.tar '{}' '+'

  find playground -name 'file-A' | tar cf - --files-from=- | gzip > playground.tgz

The second command is the interesting one: in tar cf - --files-from=-, each - stands for standard input or standard output
tar can compress directly via gzip or bzip2 with the z and j options: z uses gzip (.tgz), j uses bzip2 (.tbz)

  find playground -name 'file-A' | tar czf playground.tgz -T -

Backing up files over the network:

  ssh remote-sys 'tar cf - Documents' | tar xf -

zip, unzip: these commands are well documented elsewhere, so only short examples are listed:

  zip -r playground.zip playground  # -r is required to archive everything under playground
  unzip ../playground.zip           # unlike tar, zip archives are unarchived with unzip
  unzip -l ../playground.zip
  1. Sync command: rsync, used as rsync options source destination,
    where source and destination can each be a local path or a remote location.

Note: at least one of source and destination must be local; remote-to-remote copying is not allowed.
Examples:

  rsync -av source destination   # -a means archive mode, -v verbose output
  rsync -av source/ destination
  # The difference: the trailing slash copies only the contents of source into
  # destination, while the first form copies the source directory itself into destination.

  rsync -av --delete source/ destination   # --delete makes an exact copy: files deleted from source are also deleted from destination
  1. Using rsync over a network, in two ways:
    • the source machine has rsync installed and the destination has a remote shell program such as ssh
    • the destination runs an rsync server; rsync can be configured as a daemon that waits for sync requests
  sudo rsync -av --delete --rsh=ssh /etc /home /usr/local remote-sys:/backup
  # --rsh=ssh tells rsync to use ssh as the transport for the sync
  rsync -av --delete rsync://archive.linux.duke.edu/fedora/linux/development/rawhide/Everything/x86_64/os/ fedora-devel

Text Processing

  [me@linuxbox ~]$ cat > foo.txt    # Ctrl-d ends the input
  [me@linuxbox ~]$ cat -A foo.txt   # ^I marks a tab and $ the end of a line, so -A distinguishes tabs from spaces
  [me@linuxbox ~]$ cat -nA foo.txt  # -n also shows line numbers
  [me@linuxbox ~]$ du -s /usr/share/* | head
  252 /usr/share/aclocal
  96 /usr/share/acpi-support
  8 /usr/share/adduser
  196 /usr/share/alacarte
  344 /usr/share/alsa
  8 /usr/share/alsa-base
  12488 /usr/share/anthy
  8 /usr/share/apmd


  # Now sort the results: -nr treats the strings as numbers and reverses the order.
  # This works here because the first column is numeric, and sort uses it by default.
  [me@linuxbox ~]$ du -s /usr/share/* | sort -nr | head
  509940 /usr/share/locale-langpack
  242660 /usr/share/doc
  197560 /usr/share/fonts
  179144 /usr/share/gnome
  146764 /usr/share/myspell
  144304 /usr/share/gimp
  135880 /usr/share/dict
  76508 /usr/share/icons
  68072 /usr/share/apps
  62844 /usr/share/foomatic

  # And how do we sort output like this?
  [shaohua.li@10-11-112-3 ~]$ ls -l /usr/bin/ | head
  total 58404
  -rwxr-xr-x  1 root root     33408 Nov 10  2015 [
  -rwxr-xr-x  1 root root    106792 Nov 10  2015 a2p
  -rwxr-xr-x. 1 root root     14984 Aug 18  2010 acpi_listen
  -rwxr-xr-x. 1 root root     23488 Nov 11  2010 addftinfo
  -rwxr-xr-x  1 root root     24904 Jul 23  2015 addr2line
  -rwxr-xr-x. 1 root root      1786 Feb 21  2013 apropos
  -rwxr-xr-x  1 root root     56624 Jul 23  2015 ar
  -rwxr-xr-x  1 root root    328392 Jul 23  2015 as
  -rwxr-xr-x. 1 root root     10400 Sep 23  2011 attr

  # -k 5 uses the fifth field as the sort key
  [shaohua.li@10-11-112-3 ~]$ ls -l /usr/bin/ | sort -nr -k 5 | head
  -rwxr-xr-x  1 root root   3214440 Dec 12  2016 mysql
  -rwxr-xr-x  1 root root   3051080 Dec 12  2016 mysqlbinlog
  -rwxr-xr-x  1 root root   2998400 Dec 12  2016 mysqldump
  -rwxr-xr-x  1 root root   2948832 Dec 12  2016 mysqlslap
  -rwxr-xr-x  1 root root   2936680 Dec 12  2016 mysqladmin
  -rwxr-xr-x  1 root root   2935688 Dec 12  2016 mysqlcheck
  -rwxr-xr-x  1 root root   2933128 Dec 12  2016 mysqlimport
  -rwxr-xr-x  1 root root   2931712 Dec 12  2016 mysqlshow
  -rwxr-xr-x  1 root root   2814328 Dec 12  2016 my_print_defaults
  -rwxr-xr-x  1 root root   2811544 Dec 12  2016 mysql_waitpid


  # A more involved example
  root@precise64:~/shell_test#  cat distros.txt
  Fedora  5    03/20/2006
  Fedora  6    10/24/2006
  Fedora  7    05/31/2007
  Fedora  8    11/08/2007
  Fedora  9    05/13/2008
  Fedora  10   11/25/2008
  SUSE    10.1 05/11/2006
  SUSE    10.2 12/07/2006
  SUSE    10.3 10/04/2007
  SUSE    11.0 06/19/2008
  Ubuntu  6.06 06/01/2006
  Ubuntu  6.10 10/26/2006
  Ubuntu  7.04 04/19/2007
  Ubuntu  7.10 10/18/2007
  Ubuntu  8.04 04/24/2008
  Ubuntu  8.10 10/30/2008

  # How do we sort the distros by distribution and release version, or by release date?

  # Sorting purely by release version:
  root@precise64:~/shell_test# sort distros.txt  -nrk 2
  SUSE    11.0 06/19/2008
  SUSE    10.3 10/04/2007
  SUSE    10.2 12/07/2006
  SUSE    10.1 05/11/2006
  Fedora  10   11/25/2008
  Fedora  9    05/13/2008
  Ubuntu  8.10 10/30/2008
  Ubuntu  8.04 04/24/2008
  Fedora  8    11/08/2007
  Ubuntu  7.10 10/18/2007
  Ubuntu  7.04 04/19/2007
  Fedora  7    05/31/2007
  Ubuntu  6.10 10/26/2006
  Ubuntu  6.06 06/01/2006
  Fedora  6    10/24/2006
  Fedora  5    03/20/2006

  # Combined sort with multiple keys: distribution name, then version number
  root@precise64:~/shell_test# sort --key=1,1 --key=2n distros.txt
  Fedora  5    03/20/2006
  Fedora  6    10/24/2006
  Fedora  7    05/31/2007
  Fedora  8    11/08/2007
  Fedora  9    05/13/2008
  Fedora  10   11/25/2008
  SUSE    10.1 05/11/2006
  SUSE    10.2 12/07/2006
  SUSE    10.3 10/04/2007
  SUSE    11.0 06/19/2008
  Ubuntu  6.06 06/01/2006
  Ubuntu  6.10 10/26/2006
  Ubuntu  7.04 04/19/2007
  Ubuntu  7.10 10/18/2007
  Ubuntu  8.04 04/24/2008
  Ubuntu  8.10 10/30/2008

  # A key may be written f[.c][opts], selecting character position c within field f for comparison
  root@precise64:~/shell_test# sort -k 3.7nbr -k 3.1nbr -k 3.4nbr distros.txt
  Fedora  10   11/25/2008
  Ubuntu  8.10 10/30/2008
  SUSE    11.0 06/19/2008
  Fedora  9    05/13/2008
  Ubuntu  8.04 04/24/2008
  Fedora  8    11/08/2007
  Ubuntu  7.10 10/18/2007
  SUSE    10.3 10/04/2007
  Fedora  7    05/31/2007
  Ubuntu  7.04 04/19/2007
  SUSE    10.2 12/07/2006
  Ubuntu  6.10 10/26/2006
  Fedora  6    10/24/2006
  Ubuntu  6.06 06/01/2006
  SUSE    10.1 05/11/2006
  Fedora  5    03/20/2006


  # The --debug option is handy for revealing which key bytes sort actually used, when the keys or order are unclear
  root@precise64:~/shell_test# cat /etc/passwd | sort -t ':' -k 7 --debug | head
  sort: using `en_US' sorting rules
  root:x:0:0:root:/root:/bin/bash
                        _________
  _______________________________
  vagrant:x:1000:1000:vagrant,,,:/home/vagrant:/bin/bash
                                               _________
  ______________________________________________________
  messagebus:x:102:105::/var/run/dbus:/bin/false
                                      __________
  ______________________________________________
  mysql:x:106:111:MySQL Server,,,:/nonexistent:/bin/false

Shell Syntax

  1. Variables and Constants
    • Shell variables are dynamic: no prior declaration or type is needed (there are no types; everything is effectively a string). An undefined or unassigned variable expands to empty, so watch for spelling mistakes, which the shell silently treats as new variables.
    • Constants: by convention, constants are named in all uppercase to distinguish them from ordinary variables; declare -r TITLE="Page Title" declares a read-only one
    • Assignment: variable=value. The shell does not distinguish value types; every value is treated as a string. Note that there are no spaces around =
    • When referencing a variable's value, {} may be needed to separate the variable name from surrounding text
  a=z                  # assign the string z to a
  b="a string"
  c="a string and $b"  # other expansions can be embedded in the value

  d="$(ls -l foo.txt)" # the value is the result of a command substitution
  e=$((5 * 7))         # arithmetic expansion

  a=5 b="string"       # multiple variables can be assigned on one line

  filename="myfile"
  touch $filename
  mv "$filename" "$filename1"  # intended to rename myfile to myfile1, but the shell
                               # reads $filename1 as one variable name, which is unset
  mv: missing destination file operand after `myfile'
  Try `mv --help' for more information.

  mv "$filename" "${filename}1" # braces resolve the ambiguity
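The brace fix can be run end to end; the file names are the same invented ones as above:

```shell
cd "$(mktemp -d)"
filename="myfile"
touch "$filename"

# "$filename1" would be read as one variable name, filename1, which is unset.
# Braces mark exactly where the variable name ends:
mv "$filename" "${filename}1"
ls   # shows: myfile1
```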

Function definition:

  function name {
      commands
      return
  }

  name() {
      commands
      return
  }
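A minimal sketch using the second definition form; greet is an arbitrary example name:

```shell
# greet: print a greeting for its first argument; fail if none is given.
greet() {
    local who="$1"   # a function sees its own positional parameters
    if [ -z "$who" ]; then
        return 1     # return sets the function's exit status
    fi
    echo "hello, $who"
}

greet world          # prints: hello, world
```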

Flow Control:

  if commands; then 
    commands
  [elif commands; then 
    commands...]
  [else 
    commands]
  fi
  root@precise64:~/shell_test# ls -d /usr/bin/
  /usr/bin/
  root@precise64:~/shell_test# echo $?
  0
  root@precise64:~/shell_test# ls -d /bin/usr
  ls: cannot access /bin/usr: No such file or directory
  root@precise64:~/shell_test# echo $?
  2
  INT=5
  if ((INT == 0)); then
      echo "INT is zero"
  fi

read: read reads values from standard input.

| Option     | Description                                                 |
|------------|-------------------------------------------------------------|
| -a array   | Assign the input to the array array                         |
| -e         | Use Readline to handle the input                            |
| -i string  | A default value, used when the user just presses Enter      |
| -p prompt  | Display prompt before reading the input                     |
| -r         | Raw mode; do not interpret backslash characters as escapes  |
| -s         | Silent mode, useful for password input                      |
| -t seconds | Time out after seconds                                      |
| -u fd      | Read from file descriptor fd instead of standard input      |
  #!/bin/bash

  echo -n "please enter an integer -> "
  read int

  if [[ "$int" =~ ^-?[0-9]+$ ]]; then
      if (( int == 0 )); then
          echo "int is zero"
      else
          if (( int < 0)); then
              echo "$int is negative"
          else
              echo "$int is positive"
          fi
      fi
  fi

  # Reading into multiple vars, much like Ruby multiple assignment:
  # when there are more values than variables, the last variable stores all the extras;
  # when there are fewer values than variables, the leftover variables are empty

  #!/bin/bash
  # read-multiple: read multiple values from keyboard
  echo -n "Enter one or more values > "
  read var1 var2 var3 var4 var5
  echo "var1 = '$var1'"
  echo "var2 = '$var2'"
  echo "var3 = '$var3'"
  echo "var4 = '$var4'"
  echo "var5 = '$var5'"

  vagrant@precise64:/vagrant_data/shell_test$ ./read-multiple.sh
  Enter one or more values > 1 2 3 4 4 45 5
  var1 = '1'
  var2 = '2'
  var3 = '3'
  var4 = '4'
  var5 = '4 45 5'

  vagrant@precise64:/vagrant_data/shell_test$ ./read-multiple.sh
  Enter one or more values > 1
  var1 = '1'
  var2 = ''
  var3 = ''
  var4 = ''
  var5 = ''

  # When read is given no variable, it uses the default variable REPLY
  #!/bin/bash
  # read-single: read multiple values into default variable
  echo -n "Enter one or more values > "
  read
  echo "REPLY = '$REPLY'"

  vagrant@precise64:/vagrant_data/shell_test$ ./read-single.sh
  Enter one or more values > 1
  REPLY = '1'



Flow Control: Looping with while/until

  while commands; do
      commands
  done


  count=1
  while [[ "$count" -le 5 ]];
      do echo "$count"
      count=$((count + 1))
  done
  echo "Finished."
  #!/bin/bash
  # until-count: display a series of numbers
  count=1
  until [[ "$count" -gt 5 ]]; do
      echo "$count"
      count=$((count + 1))
  done
  echo "Finished."
  #!/bin/bash
  # posit-param2: script to display all arguments
  count=1
  while [[ $# -gt 0 ]]; do
    echo "Argument $count = $1"
    count=$((count + 1))
    shift
  done
# fun_test.sh
#!/bin/bash
# posit-params3: script to demonstrate $* and $@
print_params () {
    echo "\$1 = $1"
    echo "\$2 = $2"
    echo "\$3 = $3"
    echo "\$4 = $4"
}
pass_params () {
    echo -e "\n" '$*'; print_params $*
    echo -e "\n" '$*'; print_params "$*"
    echo -e "\n" '$@'; print_params $@
    echo -e "\n" '$@'; print_params "$@"
}
pass_params "word" "words with spaces"


# running ./fun_test.sh
root@precise64:/vagrant_data/shell_test# ./fun_test.sh

 $*
$1 = word
$2 = words
$3 = with
$4 = spaces

 $*
$1 = word words with spaces
$2 =
$3 =
$4 =

 $@
$1 = word
$2 = words
$3 = with
$4 = spaces

 $@
$1 = word
$2 = words with spaces
$3 =
$4 =

for loop:

for variable [in words]; do 
  commands
done
for (( expression1; expression2; expression3 )); do
  commands
done

for (( i=0; i<5; i=i+1 )); do
    echo $i
done
[me@linuxbox ~]$ for i in A B C D; do echo $i; done
A
B
C
D


for i in {A..D}; do echo $i; done
A
B
C
D

[me@linuxbox ~]$ for i in distros*.txt; do echo "$i"; done
distros-by-date.txt
distros-dates.txt
distros-key-names.txt
distros-key-vernums.txt
distros-names.txt
distros.txt
distros-vernums.txt
distros-versions.txt



# ./for_test.sh file
for i; do
    echo "i in ---------- ${i} \n"
done


# With no "in words" list, for iterates over the command line arguments
root@precise64:/vagrant_data/shell_test# ./for_test.sh  a b c d e f j
i in ---------- a \n
i in ---------- b \n
i in ---------- c \n
i in ---------- d \n
i in ---------- e \n
i in ---------- f \n
i in ---------- j \n

Strings and Numbers

| expression | meaning |
| --- | --- |
| `${para:-word}` | if `para` is unset or empty, the expansion is `word` |
| `${para:=word}` | if `para` is unset or empty, the expansion is `word` and `para` is also assigned `word` (positional parameters cannot be assigned this way) |
| `${para:?word}` | if `para` is unset or empty, the script exits and `word` is written to stderr |
| `${para:+word}` | if `para` is not empty, the expansion is `word` |
| `${!prefix*}` or `${!prefix@}` | expand to the names of the variables beginning with `prefix` |
| `${#para}` | the length of `para`; if `para` is `@` or `*`, the number of command-line parameters |
| `${para:offset}`, `${para:offset:length}` | substring extraction; without `length` it runs to the end of the string; when `para` is `@`, it expands to the positional parameters from `offset` to the end |
| `${para#pattern}`, `${para##pattern}` | remove the part of the string matching `pattern` from the beginning, leaving the rest; `#` removes the shortest match, `##` the longest |
| `${para%pattern}`, `${para%%pattern}` | same, but the removed fragment starts from the end of the string instead of the beginning |
| `${para/pattern/string}`, `${para//pattern/string}`, `${para/#pattern/string}`, `${para/%pattern/string}` | search-and-replace: substitute `string` for the part of `para` matching `pattern`; `/` replaces only the first match, `//` all matches, `/#` only a match at the beginning, `/%` only a match at the end |

Case conversion:

| expression | meaning |
| --- | --- |
| `${para,,}` | expands to `para` in all lowercase |
| `${para,}` | expands to `para` with its first letter lowercased |
| `${para^^}` | expands to `para` in all uppercase |
| `${para^}` | expands to `para` with its first letter uppercased |
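These expansions are easiest to absorb by example; a small sketch (`file` and `name` are made-up illustration variables):

```shell
#!/bin/bash
# parameter-expansion sketch
file="archive.tar.gz"

echo "${name:-anonymous}"   # name is unset, so the default "anonymous" is used
echo "${#file}"             # string length: 14
echo "${file#*.}"           # shortest prefix match removed -> tar.gz
echo "${file##*.}"          # longest prefix match removed  -> gz
echo "${file%.*}"           # shortest suffix match removed -> archive.tar
echo "${file/tar/zip}"      # first match replaced          -> archive.zip.gz
echo "${file^^}"            # all uppercase                 -> ARCHIVE.TAR.GZ
```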

Arithmetic: the basic form is `$((expression))`

| operator | description |
| --- | --- |
| `+` | addition |
| `-` | subtraction |
| `*` | multiplication |
| `/` | integer division |
| `**` | exponentiation |
| `%` | modulo (remainder) |
| `para = value` | assignment |
| `para += value` | shorthand for `para = para + value` |
| `para -= value` | shorthand for `para = para - value` |
| `para *= value` | shorthand for `para = para * value` |
| `para /= value` | shorthand for `para = para / value` |
| `para %= value` | shorthand for `para = para % value` |
| `para++` / `para--` | post-increment / post-decrement |
| `++para` / `--para` | pre-increment / pre-decrement |
| `<=`, `>=`, `<`, `>` | comparisons |
| `==`, `!=` | equal / not equal |
| `&&` | logical AND |
| `\|\|` | logical OR |
| `expr1 ? expr2 : expr3` | ternary: evaluates `expr2` if `expr1` is non-zero, otherwise `expr3` |
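A quick sketch exercising these operators inside `$(( ))`:

```shell
#!/bin/bash
# arithmetic sketch
a=5
echo $(( a + 3 ))            # 8
echo $(( a ** 2 ))           # 25 (exponentiation)
echo $(( a % 2 ))            # 1  (remainder)
(( a += 10 ))                # compound assignment: a is now 15
(( a++ ))                    # post-increment: a is now 16
echo $a                      # 16
echo $(( a > 10 ? 1 : 0 ))   # ternary expression -> 1
```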

Arrays: supported only since bash version 2; the original shell had no array support.

a[1]=foo
echo ${a[1]}

declare -a a
[me@linuxbox ~]$ animals=("a dog" "a cat" "a fish") 
[me@linuxbox ~]$ for i in ${animals[*]}; do echo $i; done 
a
dog
a
cat
a
fish

[me@linuxbox ~]$ for i in ${animals[@]}; do echo $i; done
a
dog
a
cat
a
fish

[me@linuxbox ~]$ for i in "${animals[*]}"; do echo $i; done
a dog a cat a fish

[me@linuxbox ~]$ for i in "${animals[@]}"; do echo $i; done
a dog
a cat
a fish

# "${!array[*]}", "${!array[@]}"
[me@linuxbox ~]$ foo=([2]=a [4]=b [6]=c)

[me@linuxbox ~]$ for i in "${foo[@]}"; do echo $i; done 
a
b
c

# list the subscripts of the array that actually hold values
[me@linuxbox ~]$ for i in "${!foo[@]}"; do echo $i; done
2
4
6

#!/bin/bash
# array-sort: sort an array
a=(f e d c b a)
echo "Original array: ${a[@]}"
# the traditional way to sort an array: the shell has no type system rich
# enough to provide a built-in array sort, so pipe the elements through sort
a_sorted=($(for i in "${a[@]}"; do echo $i; done | sort))
echo "Sorted array: ${a_sorted[@]}"
[me@linuxbox ~]$ foo=(a b c d e f)
[me@linuxbox ~]$ echo ${foo[@]}
a b c d e f
[me@linuxbox ~]$ unset foo
[me@linuxbox ~]$ echo ${foo[@]}

[me@linuxbox ~]$


[me@linuxbox ~]$ foo=(a b c d e f)
[me@linuxbox ~]$ echo ${foo[@]}
a b c d e f
[me@linuxbox ~]$ unset 'foo[2]'
[me@linuxbox ~]$ echo ${foo[@]}
a b d e f


[me@linuxbox ~]$ foo=(a b c d e f)
[me@linuxbox ~]$ foo=     # assigning with no subscript only empties element 0
[me@linuxbox ~]$ echo ${foo[@]}
b c d e f


[me@linuxbox ~]$ foo=(a b c d e f)
[me@linuxbox ~]$ echo ${foo[@]}
a b c d e f
[me@linuxbox ~]$ foo=A    # a scalar assignment just replaces element 0
[me@linuxbox ~]$ echo ${foo[@]}
A b c d e f

Group Commands and Subshells:

(ls -l; echo "Listing of foo.txt"; cat foo.txt) > output.txt
{ ls -l; echo "Listing of foo.txt"; cat foo.txt; } > output.txt

# equivalent to:
ls -l > output.txt
echo "Listing of foo.txt" >> output.txt
cat foo.txt >> output.txt
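The practical difference between the two forms: the parenthesized subshell runs in a child process, so its variable assignments vanish, while the braced group runs in the current shell (a minimal sketch):

```shell
#!/bin/bash
x=1
( x=2 )      # subshell: the assignment happens in a child process and is lost
echo "$x"    # still 1
{ x=3; }     # group command: runs in the current shell
echo "$x"    # now 3
```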

Process Substitution: a group command `{ ...; }` runs in the current shell, while a subshell `( ... )` runs in a child process, which also makes the group command the more efficient of the two. Process substitution runs a command in a child process and lets its output be handled in the current shell; typically the child's data stream is fed to `read` running in the current shell, so that variables set while reading are not lost in a subshell.


#!/bin/bash
# pro-sub: demo of process substitution
while read attr links owner group size date time filename; do
    echo "$filename: owner=$owner, size=$size"
done < <(ls -l | tail -n +2)
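To see why this matters, compare a pipeline (the `while` runs in a subshell, so the counter is discarded) with process substitution (the `while` stays in the current shell); a small sketch:

```shell
#!/bin/bash
# pipeline: the while loop runs in a subshell, so the counter is discarded
count=0
printf 'a\nb\nc\n' | while read -r line; do
    count=$(( count + 1 ))
done
echo "after pipeline: $count"       # prints 0

# process substitution: the loop runs in the current shell, so count survives
count=0
while read -r line; do
    count=$(( count + 1 ))
done < <(printf 'a\nb\nc\n')
echo "after substitution: $count"   # prints 3
```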

Traps: handling signals. Syntax: `trap argument signal [signal...]`, where `argument` is a string that is read and executed as a command when the signal arrives, for example:

trap "echo 'I am ignoring you.'" SIGINT SIGTERM

trap exit_on_signal_SIGINT SIGINT
trap exit_on_signal_SIGTERM SIGTERM
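The two handler names above are ordinary shell functions that we have to define ourselves; a minimal cleanup-style sketch (the temp-file work is illustrative):

```shell
#!/bin/bash
# trap-demo: remove a temp file when interrupted or terminated

exit_on_signal_SIGINT () {
    echo "Caught SIGINT, cleaning up." >&2
    rm -f "$tmpfile"
    exit 130
}

exit_on_signal_SIGTERM () {
    echo "Caught SIGTERM, cleaning up." >&2
    rm -f "$tmpfile"
    exit 143
}

trap exit_on_signal_SIGINT SIGINT
trap exit_on_signal_SIGTERM SIGTERM

tmpfile=$(mktemp)
echo "working... (pid $$) -- try: kill -INT $$"
sleep 1                 # stand-in for real work
rm -f "$tmpfile"        # normal-path cleanup
```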

Docker book

Docker's structure: client + server. The Docker server is a daemon that abstracts the containers underneath it and, together with the client, exposes a RESTful API for the client to use.

Concepts: images and containers. Images are the foundation of the Docker world, much like classes in object-oriented programming; every container runs on top of an image, so a container resembles an instance object. Images belong to the build/packaging phase of the Docker lifecycle, containers to the launch-and-execute phase.

Docker's distinguishing features? The role of Linux namespaces:

Docker images:

FROM ubuntu:14.04
RUN apt-get update
RUN apt-get install -y nginx
RUN echo 'Hi, I am in your container' > /usr/share/nginx/html/index.html
EXPOSE 80

docker build -t="static_web" ./
docker run -d -p 80 --name static_web static_web nginx -g "daemon off;"
1. Workflow: `docker build` turns the Dockerfile into the image; `docker run` then starts containers from that image.

Dockerfile instructions:
  * ENTRYPOINT & CMD: ENTRYPOINT fixes the command the container runs at start; CMD supplies its default arguments. Arguments given on the `docker run` command line override CMD and are appended to ENTRYPOINT:

    ```shell
    ENTRYPOINT ["/usr/sbin/nginx"]
    CMD ["-h"]
    ```

    ```shell
    docker run -t -i static_web -g "daemon off;"
    ```
  * WORKDIR: sets the working directory in which subsequent instructions execute.
  * ENV: sets environment variables that can be used in subsequent RUN instructions and other commands. These variables persist into any container created from the image; by contrast, variables passed with `docker run -e` are effective for that single run only.
  
    ```shell
    ENV RVM_PATH /home/rvm
    RUN gem install unicorn
    # equivalent to: RVM_PATH=/home/rvm gem install unicorn

    ENV TARGET_DIR /opt/app
    WORKDIR $TARGET_DIR
    ```
  * VOLUME: adds a volume to any container created from the image. A volume is a special directory inside the container that can be shared across file systems and provides persistence. Its properties:
    * volumes can be shared and reused between containers
    * a container is not required to share its volumes with other containers
    * changes to a volume take effect immediately
    * changes to a volume do not affect the image
    * a volume persists until no container uses it any more (which shows that volumes are managed by Docker itself, not by a container or by the operating system)
    * `VOLUME ["/opt/project", "/data"]` — the array form creates multiple mount points
  * ADD: copies files or directories from the build context into the image: `ADD source target`, e.g. `ADD software /opt/application/software`.
    * `source` can be a file, a directory, or a URL; files outside the build context cannot be ADDed (Docker only cares about the build context, and anything outside it is unavailable to the instruction)
    * if the `target` directory does not exist, Docker creates the full path; newly created files and directories get permissions 0755
    * an ADD instruction invalidates the build cache for every instruction that follows it
    * ADD unpacks recognized archives, e.g. `ADD latest.tar.gz /var/www/wordpress/`
  * COPY: unlike ADD, COPY is a pure copy and does not unpack archives.
  * Removing an image: `docker rmi static_web`
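A sketch of the ADD/COPY difference (the archive name is illustrative only):

```shell
# Dockerfile fragment: ADD unpacks recognized archives, COPY never does
ADD  site.tar.gz /var/www/    # extracted: /var/www/ receives the archive's contents
COPY site.tar.gz /var/www/    # copied verbatim: /var/www/site.tar.gz
```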

### Practice:
`-v` lets us mount a host directory into the container as a volume: `-v source:target`.

* Building a Redis image

```shell
FROM ubuntu:14.04
ENV REFRESHED_AT 2020-10-09
RUN apt-get update
RUN apt-get -y install redis-server redis-tools
EXPOSE 6379
ENTRYPOINT ["/usr/bin/redis-server"]
CMD []
```

```shell
docker run -d --name redis_con redis

docker run -p 4567 --name webapp --link redis_con:db -t -i sinatra /bin/bash
# the --link flag links the sinatra container to redis_con under the alias db

docker run -d --name redis_con redis
docker run --link redis_con:db -i -t ubuntu /bin/bash

root@31c4f6ac36a4:/# cat /etc/hosts
127.0.0.1	localhost
::1	localhost ip6-localhost ip6-loopback
fe00::0	ip6-localnet
ff00::0	ip6-mcastprefix
ff02::1	ip6-allnodes
ff02::2	ip6-allrouters
172.17.0.3	db 949baab68dd4 redis_con
172.17.0.4	31c4f6ac36a4


root@31c4f6ac36a4:/# env

DB_PORT_6379_TCP_ADDR=172.17.0.3
DB_PORT_6379_TCP=tcp://172.17.0.3:6379
DB_PORT=tcp://172.17.0.3:6379
....
require 'uri'
require 'redis'

uri = URI.parse(ENV['DB_PORT'])
redis = Redis.new(:host => uri.host, :port => uri.port)

```

Practice: auto-building a blog website with Jekyll and Apache

FROM ubuntu:14.04
ENV REFRESHED_AT 2020-10-10

RUN apt-get update
RUN apt-get install -y ruby ruby-dev make nodejs
RUN gem install --no-rdoc --no-ri jekyll

VOLUME /data
VOLUME /var/www/html

WORKDIR /data

ENTRYPOINT [ "jekyll", "build", "--destination=/var/www/html" ]

docker build -t jekyll ./

The Apache Dockerfile:

FROM ubuntu:14.04
ENV REFRESHED_AT 2020-10-10

RUN apt-get update
RUN apt-get install -y apache2

VOLUME [ "/var/www/html" ]
WORKDIR /var/www/html

ENV APACHE_RUN_USER www-data
ENV APACHE_RUN_GROUP www-data
ENV APACHE_LOG_DIR /var/log/apache2
ENV APACHE_PID_FILE /var/run/apache2.pid
ENV APACHE_RUN_DIR /var/run/apache2
ENV APACHE_LOCK_DIR /var/lock/apache2

RUN mkdir -p $APACHE_RUN_DIR $APACHE_LOCK_DIR $APACHE_LOG_DIR
EXPOSE 80

ENTRYPOINT [ "/usr/sbin/apache2" ]
CMD ["-D", "foreground"]

docker build -t apache ./

docker run -v /home/james/blog:/data/ --name jekyll_con jekyll
docker run -d -P --volumes-from jekyll_con --name apache_con apache
# --volumes-from adds all volumes of the named container to the newly created
# one: apache_con can access every volume of jekyll_con, i.e. it can read the
# generated blog files in the /var/www/html volume.
# A volume is only cleaned up once no container uses it any more; that is,
# after `docker rm jekyll_con` the contents of /var/www/html would be gone
# (or must apache_con be removed as well, since it still holds the volume?
# worth verifying by experiment)

docker run --rm --volumes-from jekyll_con -v $(pwd):/backup ubuntu tar cvf /backup/blog.tar /var/www/html
# start a throwaway container that archives the shared /var/www/html volume
# into a tar file in a directory outside the containers

Managing Docker containers without ssh

  PID=$(docker inspect --format '{{.State.Pid}}' 949baab68dd4)
  nsenter --target $PID --mount --uts --ipc --net --pid
  nsenter --target $PID --mount --uts --ipc --net --pid ls

Orchestrating Docker containers may well be the crucial next step: Kubernetes.

? Some problems met in practice:

ssh: `ssh ip "command"` is often used to run a command on the remote host `ip`, but when the command is complex, e.g. contains a for loop, it keeps failing with errors like `sh: 2: Syntax error: word unexpected (expecting "do")`. After some searching (https://stackoverflow.com/questions/26325685/execute-for-loop-over-ssh) I realized what I had overlooked: a `$` inside the double-quoted command is expanded by the local shell before the command is sent to the `$ip` machine, so the remote host receives a command different from the one we meant to pass to ssh. Two ways to solve this: