I have an old drive with existing LVM data. The drive is plugged in as /dev/sdb.

[root@bt ~]# fdisk -l /dev/sdb

Disk /dev/sdb: 300.1 GB, 300069052416 bytes
255 heads, 63 sectors/track, 36481 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x92fc9607

   Device Boot      Start         End      Blocks   Id  System
  /dev/sdb1   *           1          13      104391   83  Linux
  /dev/sdb2              14       36481   292929210   8e  Linux LVM
[root@bt ~]#

Let's check the status of the logical volumes:

[root@bt ~]# lvscan -a
  inactive          '/dev/VolGroup00/LogVol00' [278.78 GiB] inherit
  inactive          '/dev/VolGroup00/LogVol01' [576.00 MiB] inherit
  ACTIVE            '/dev/VolGroup/lv_root' [50.00 GiB] inherit
  ACTIVE            '/dev/VolGroup/lv_home' [315.22 GiB] inherit
  ACTIVE            '/dev/VolGroup/lv_swap' [5.88 GiB] inherit
[root@bt ~]#

So the two volumes on sdb are currently inactive. The easiest way to tell whether just the logical volumes are inactive, or the entire volume group is, is to check whether the volume group's directory exists in /dev.

[root@bt ~]# ls -l /dev/VolGroup*
/dev/VolGroup:
total 0
lrwxrwxrwx 1 root root 7 May  9 17:11 lv_home -> ../dm-2
lrwxrwxrwx 1 root root 7 May  9 17:11 lv_root -> ../dm-0
lrwxrwxrwx 1 root root 7 May  9 17:11 lv_swap -> ../dm-1
[root@bt ~]#

As you can see, only the volumes that are already marked active show up. So we know it is the volume group that is inactive. Let's go ahead and activate it.

[root@bt ~]# vgchange -a y VolGroup00
  2 logical volume(s) in volume group "VolGroup00" now active
[root@bt ~]#

Had the volume group been active, but the logical volumes inactive, you would use "lvchange -a y <vg>/<lv>" instead. This is typically the case in a system recovery. Now you can confirm that it is enabled by again checking in /dev.
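A sketch of that recovery case, using the LV names from the scan above (hypothetical scenario, needs root):

```shell
# If VolGroup00 itself were already active, you would activate the
# individual LVs rather than the whole group:
lvchange -a y VolGroup00/LogVol00
lvchange -a y VolGroup00/LogVol01

# Confirm they now show as ACTIVE:
lvscan | grep VolGroup00
```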

[root@bt ~]# ls -l /dev/VolGroup*
/dev/VolGroup:
total 0
lrwxrwxrwx 1 root root 7 May  9 17:11 lv_home -> ../dm-2
lrwxrwxrwx 1 root root 7 May  9 17:11 lv_root -> ../dm-0
lrwxrwxrwx 1 root root 7 May  9 17:11 lv_swap -> ../dm-1

/dev/VolGroup00:
total 0
lrwxrwxrwx 1 root root 7 May 19 11:42 LogVol00 -> ../dm-3
lrwxrwxrwx 1 root root 7 May 19 11:42 LogVol01 -> ../dm-4
[root@bt ~]#

And subsequently, our Logical Volumes are active now too:

[root@bt ~]# lvscan -a
  ACTIVE            '/dev/VolGroup00/LogVol00' [278.78 GiB] inherit
  ACTIVE            '/dev/VolGroup00/LogVol01' [576.00 MiB] inherit
  ACTIVE            '/dev/VolGroup/lv_root' [50.00 GiB] inherit
  ACTIVE            '/dev/VolGroup/lv_home' [315.22 GiB] inherit
  ACTIVE            '/dev/VolGroup/lv_swap' [5.88 GiB] inherit
[root@bt ~]#

LVM: Resizing an LV

In both my current job and my previous job, LVM was not a tool we were able to use. At the previous job it was because the auto-provisioning system simply would not work well with LVM. After the company went through a merger we were able to get it added, but it was quite late in the game. At my current job we use Debian Lenny, which did not provision with LVM. This drives me up a fucking wall. I cannot stress enough how important this is.

In case you have no idea what I am talking about: LVM stands for Logical Volume Manager. Instead of writing your file system directly to a partition, you put LVM on the partition, and can then chop up and group the space however you want, creating Logical Volumes. You then write your file system to the Logical Volume. The major gain is being able to resize a file system without the risk of losing it because you misaligned the blocks, the file system, etc.

[root@media smb]# lvresize --help
  lvresize: Resize a logical volume

lvresize
        [-A|--autobackup y|n]
        [--alloc AllocationPolicy]
        [-d|--debug]
        [-f|--force]
        [-h|--help]
        [-i|--stripes Stripes [-I|--stripesize StripeSize]]
        {-l|--extents [+|-]LogicalExtentsNumber[%{VG|LV|PVS|FREE|ORIGIN}] |
         -L|--size [+|-]LogicalVolumeSize[bBsSkKmMgGtTpPeE]}
        [-n|--nofsck]
        [--noudevsync]
        [-r|--resizefs]
        [-t|--test]
        [--type VolumeType]
        [-v|--verbose]
        [--version]
        LogicalVolume[Path] [ PhysicalVolumePath... ]

[root@media smb]#

In the dark ages, we needed to shut the box down, boot it up using a live CD, and then make the changes to the partition table. Using LVM, you only create one partition, so this is completely moot.

[root@media smb]# fdisk -l  /dev/sda

Disk /dev/sda: 107.4 GB, 107374182400 bytes
255 heads, 63 sectors/track, 13054 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000b93ae

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          64      512000   83  Linux
/dev/sda2              64       13055   104344576   83  Linux
[root@media smb]#

As you can see, we have a boot partition, and a partition that holds LVM.

Now, rule of thumb:
  • If you are growing a file system, you can do this online. Yes, without even unmounting it. How cool is that?
  • If you are shrinking a file system, you need to unmount it.

The reason is: if you are writing to the file system and you grow it, you just get more room... who cares? If you are shrinking, and the space you are trying to write to is no longer part of the file system... bad things happen.

It is important to note that I am using the -r flag to automatically run resize2fs/fsck after the LV is resized. Basically, you need to resize the LV, and ALSO the file system held within the LV. Then, just for safety's sake, you run fsck. Using -r cuts the three-step process down to a single step.
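For reference, here is roughly what -r is saving us from on a shrink: a sketch of the manual sequence, using this box's device path (the 15G is illustrative; the ordering is the important part):

```shell
umount /home
e2fsck -f /dev/mapper/vg_media-lv_home         # resize2fs demands a clean fsck first
resize2fs /dev/mapper/vg_media-lv_home 15G     # shrink the fs BELOW the target LV size
lvreduce -L -30G /dev/mapper/vg_media-lv_home  # then shrink the LV
resize2fs /dev/mapper/vg_media-lv_home         # grow the fs back out to fill the LV
mount /home
```

Shrinking the file system below the target first guarantees lvreduce never cuts into live file system blocks; the final resize2fs with no size argument grows it back to exactly fill the LV.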

In this example, we have a /home directory we want to shrink by 30G, and we want to re-allocate that space to /.

Step 1: Unmount /home, and shrink it by 30G.

[root@media smb]# umount /home
[root@media smb]# lvresize -r -L -30G  /dev/mapper/vg_media-lv_home
fsck from util-linux-ng 2.17.2
/dev/mapper/vg_media-lv_home: 20/2990080 files (5.0% non-contiguous), 233719/11954176 blocks
resize2fs 1.41.12 (17-May-2010)
Resizing the filesystem on /dev/mapper/vg_media-lv_home to 4089856 (4k) blocks.
The filesystem on /dev/mapper/vg_media-lv_home is now 4089856 blocks long.

  Reducing logical volume lv_home to 15.60 GiB
  Logical volume lv_home successfully resized
[root@media smb]# mount /home

Now let's confirm we shrank /home down to ~16G.

[root@media smb]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg_media-lv_root
                       50G   15G   32G  32% /
tmpfs                 935M     0  935M   0% /dev/shm
/dev/sda1             485M   40M  420M   9% /boot
/dev/mapper/vg_media-lv_home
                       16G  169M   15G   2% /home
[root@media smb]#
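As a quick sanity check that resize2fs and df agree, the reported block count works out to the same ~16G (throwaway shell arithmetic, nothing LVM-specific):

```shell
# resize2fs said the filesystem is now 4089856 blocks of 4 KiB each.
blocks=4089856
bytes=$((blocks * 4096))
gib10=$((bytes * 10 / 1024 / 1024 / 1024))   # GiB * 10, integer math only
echo "$((gib10 / 10)).$((gib10 % 10)) GiB"   # prints 15.6 GiB, which df rounds to 16G
```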

Step 2: We don't need to unmount /, because we are growing it.

[root@media smb]# lvresize -r -L +30G /dev/mapper/vg_media-lv_root
  Extending logical volume lv_root to 80.00 GiB
  Logical volume lv_root successfully resized
resize2fs 1.41.12 (17-May-2010)
Filesystem at /dev/mapper/vg_media-lv_root is mounted on /; on-line resizing required
old desc_blocks = 4, new_desc_blocks = 5
Performing an on-line resize of /dev/mapper/vg_media-lv_root to 20971520 (4k) blocks.
The filesystem on /dev/mapper/vg_media-lv_root is now 20971520 blocks long.

[root@media smb]#

Now that we have grown / by 30G, let's check that df reflects the change.

[root@media smb]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg_media-lv_root
                       79G   15G   61G  20% /
tmpfs                 935M     0  935M   0% /dev/shm
/dev/sda1             485M   40M  420M   9% /boot
/dev/mapper/vg_media-lv_home
                       16G  169M   15G   2% /home
[root@media smb]#

As you can see, we successfully grew / and shrank /home. We didn't need to turn the server off, and if we had anything running out of /home, we simply would have needed to stop it; anything running on / would have been fine to leave running. Compare this to what you would have needed to do if we weren't using LVM. Good, now I assume you will use LVM going forward.

Other neat shit you can do with LVM:
  • Use LVM snapshots to create backups, e.g. MyLVMBackup.
  • Use CLVM (clustered logical volume manager) to cluster file systems between multiple boxen.
  • Use LVM to mirror/stripe file systems to create a pseudo software RAID.
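As a taste of the snapshot idea, a hypothetical backup flow using a throwaway copy-on-write snapshot (the volume group name is this box's, but the snapshot name, mount point, sizes, and backup path are made up):

```shell
# Take a 5G copy-on-write snapshot; writes to lv_home after this point
# do not change what the snapshot sees.
lvcreate -s -L 5G -n lv_home_snap /dev/vg_media/lv_home
mkdir -p /mnt/snap
mount -o ro /dev/vg_media/lv_home_snap /mnt/snap
tar -czf /backup/home-$(date +%F).tar.gz -C /mnt/snap .   # consistent backup
umount /mnt/snap
lvremove -f /dev/vg_media/lv_home_snap                    # throw the snapshot away
```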

Recently I have been watching my log files a lot more closely. While doing so, I noticed a LOT of interesting things. The first I will mention: for the love of god, just block all of the Chinese IP space. The vast majority of password-cracking attempts are from China, and reporting them does absolutely nothing.

But the other thing I noticed was this:

Feb 10 22:55:37 core named[18088]: client 83.117.170.114#2478: transfer of 'fazey.org/IN': AXFR started
Feb 10 22:55:37 core named[18088]: client 83.117.170.114#2478: transfer of 'fazey.org/IN': AXFR ended

Wait a minute... Did you just attempt a zone transfer, and my DNS server provided it?

So, what is a zone transfer (AXFR)? Well, when you have a slave DNS server, it periodically pulls your zones to update itself, and the mechanism for that is the zone transfer. It provides all of the records for a given zone. By default, it is allowed from anyone, so if you have configured your own bind/named, it is easy to miss. Oddly enough, it is very commonly missed.

So how do we do it? You run dig against the domain's nameserver and append the query type AXFR. If successful, the output will look like this:

[root@core log]# dig @ns1.fazey.org fazey.org AXFR

; <<>> DiG 9.6.2-P2-RedHat-9.6.2-4.P2.fc11 <<>> @ns1.fazey.org fazey.org AXFR
; (1 server found)
;; global options: +cmd
fazey.org.              86400   IN      SOA     fazey.org. root.fazey.org. 2012040905 28800 7200 604800 86400
fazey.org.              86400   IN      NS      ns1.fazey.org.
fazey.org.              86400   IN      NS      ns2.fazey.org.
fazey.org.              86400   IN      MX      10 mail.fazey.org.
fazey.org.              86400   IN      A       64.85.161.114
mail.fazey.org.         86400   IN      A       64.85.161.115
[...]
ns1.fazey.org.          86400   IN      A       64.85.161.114
ns2.fazey.org.          86400   IN      A       64.85.161.115
www.fazey.org.          86400   IN      CNAME   fazey.org.
fazey.org.              86400   IN      SOA     fazey.org. root.fazey.org. 2012040905 28800 7200 604800 86400
;; Query time: 5 msec
;; SERVER: 64.85.161.114#53(64.85.161.114)
;; WHEN: Tue Feb 12 18:50:50 2013
;; XFR size: 26 records (messages 1, bytes 619)

[root@core log]#

As you can see, being able to dump my entire zone file would make doing recon a breeze for any attacker.

So how do we fix it? Edit your /etc/named.conf; inside is a block called options:

options {
        directory "/var/named";
        dnssec-enable yes;
        dnssec-validation yes;
        dnssec-lookaside . trust-anchor dlv.isc.org.;
        allow-transfer { none;};
        version "[null]";
};
The two options we add are:
  • allow-transfer { none;};
  • version "[null]";

These two directives prevent version requests and zone transfers. If you have a slave DNS server, you would put the IP of your slave where it says none. That allows your slave DNS server to function without leaving you wide open to zone transfers from everyone else. I recommend everyone use these options in their global options config.
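For example, with a hypothetical slave at 192.0.2.53, the transfer line would become (other options unchanged):

```
options {
        directory "/var/named";
        allow-transfer { 192.0.2.53; };
        version "[null]";
};
```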

So, we added our config and restarted named. Now, let's take a look at what the same request returns:

[root@core log]# dig @ns1.fazey.org fazey.org AXFR

; <<>> DiG 9.6.2-P2-RedHat-9.6.2-4.P2.fc11 <<>> @ns1.fazey.org fazey.org AXFR
; (1 server found)
;; global options: +cmd
; Transfer failed.
[root@core log]#

As you can see, we now get the desired effect of being rejected. Do yourself a favor and get your configuration updated before you notice foreign IPs attempting zone transfers.

Working in the industry, at one time or another you will have to transfer files, and I'm sure it will be in a variety of different ways. For the most part, everyone has their favorites for each situation. But I would prefer to have one utility on all servers to handle all of those situations. So, my choice has to be a badass.

Let's look at some requirements:
  • support for all of my common protocols
  • easy and logical navigation
  • parallel threads!
  • full command line usage.

lftp has been the only thing I've come across that meets my criteria. Let me prove it by giving you some examples:

Example 1: At some point, everyone has had to mirror a directory that was being served by Apache with directory indexing turned on. Something like http://pkgs.repoforge.org/bsc/.

Let's demonstrate lftp's versatility.

debian:/tmp/outgoing# lftp
lftp :~> open http://10.100.15.10/log/dists/lenny-20120514/binary-i386/
cd ok, cwd=/log/dists/lenny-20120514/binary-i386
lftp 10.100.15.10:/log/dists/lenny-20120514/binary-i386> mirror
Total: 1 directory, 69 files, 0 symlinks
New: 69 files, 0 symlinks
180374247 bytes transferred in 136 seconds (1.27M/s)
lftp 10.100.15.10:/log/dists/lenny-20120514/binary-i386>

Here we are downloading a Debian package tree from a local box. But look at the protocol... http://. I'm able to treat a web page like a CLI. It does lack the depth to do anything crazy, though. As far as I can see, there isn't a way to say "mirror http://10.100.15.10/log/dists/lenny-20120514/binary-i386/a*".

Other ways to solve this? Yes. I could have done a fancy curl request, stripped the HTML using links/lynx, and then wget the result.
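For comparison, a wget-only way to get the same mirror over HTTP (a different technique from the curl/lynx pipeline just mentioned; same local URL as above, and the --cut-dirs count is a guess that depends on how much of the path you want stripped):

```shell
# Recurse below this directory only (-np), drop the hostname (-nH) and the
# leading path components (--cut-dirs), and skip Apache's generated index pages.
wget -r -np -nH --cut-dirs=4 -R "index.html*" \
    http://10.100.15.10/log/dists/lenny-20120514/binary-i386/
```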

Example 2:

You need an entire directory copied from your server to another server (a put)... but you only have SSH.

[root@core ~]# ls -l lame
total 300
-rw-r--r-- 1 root root 100002 2013-01-13 05:32 a
-rw-r--r-- 1 root root 100003 2013-01-13 05:32 b
-rw-r--r-- 1 root root 100004 2013-01-13 05:32 c
[root@core ~]#

Now let's go ahead and log in over sftp.

[root@core ~]# lftp sftp://root@g1.ragenetworks.com
Password:
lftp root@g1.ragenetworks.com:~> mirror -R --parallel=3 lame
Total: 1 directory, 3 files, 0 symlinks
New: 3 files, 0 symlinks
300009 bytes transferred in 2 seconds (133.2K/s)
lftp root@g1.ragenetworks.com:~>

What I did was reverse (-R) mirror the directory; in other words, I put the directory from my server to the remote box. But I also did this using parallel threads (--parallel=N).

Example 3:

Along with command-line usage comes the question of how scriptable a tool is. There are many times you need to simply back up a directory with a cron job. This time we are going to use ftp, and script our remote commands in a file.

[root@core ~]# cat script-file
open ftp://username:password@fazey.org
mirror -R /root/local /home/james/remote
exit
[root@core ~]#

Now we call lftp with the "-f" flag to give it a script input.

[root@core ~]# lftp -f script-file
at 80527360 (80%) 35.12M/s eta:1s [Sending data]
...
[root@core ~]#
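To turn that into the cron job mentioned above, a hypothetical crontab entry (log path made up) that runs the backup nightly:

```
# m   h   dom mon dow  command
30    2   *   *   *    /usr/bin/lftp -f /root/script-file >> /var/log/lftp-backup.log 2>&1
```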

As you can see, lftp is a hell of a tool.

N2N: Super simple VPN

After having a bunch of success with PF_RING, I decided to check out some of ntop.org's other creations. One I came across that I had a use for was N2N. Basically, you run a supernode daemon, and you create tunnels to it from your edge nodes. The setup is about as simple as it can really be.

Pretty much exactly what the manual says.

Set up your supernode (a relay, for lack of a better phrase):

supernode -l 9939

Then all you need for an edge node is:

edge -a 10.10.2.1 -c some_community -k some_key -l <supernode ip>:9939

Next edge node:

edge -a 10.10.2.2 -c some_community -k some_key -l <supernode ip>:9939

Then from either node, you should be able to reach the other.

[root@core ~]# ping 10.10.2.1
PING 10.10.2.1 (10.10.2.1) 56(84) bytes of data.
64 bytes from 10.10.2.1: icmp_seq=1 ttl=64 time=0.073 ms
64 bytes from 10.10.2.1: icmp_seq=2 ttl=64 time=0.070 ms
64 bytes from 10.10.2.1: icmp_seq=3 ttl=64 time=0.063 ms
^C
--- 10.10.2.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2496ms
rtt min/avg/max/mdev = 0.063/0.068/0.073/0.010 ms
[root@core ~]#

That's it. Seriously... Now, if you want it to persist, you need to make an init script for supernode and edge. I'm also not a huge fan of the key sitting there visible in the process list on the edge servers.

[root@core ~]# ps aux | grep edge
root      2367  0.0  0.1   3644   724 ?       Ss   Aug30   0:33 edge -a 10.10.2.1 -c HOME -k superkey -l g1.poop.com:4099
root     22730  0.0  0.1   4200   728 pts/0   S+   20:04   0:00 grep edge
[root@core ~]#

That's kind of blatant to just leave lying around; it pretty much screams its key in the process list. So I would use a shell script or something to wrap it, so it's a little less obvious.
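A minimal wrapper along those lines, assuming a root-only key file at /etc/n2n/edge.key (note the key is still visible in ps once edge is running; this mainly keeps it out of init scripts, shell history, and casual eyes):

```shell
#!/bin/sh
# Hypothetical /usr/local/sbin/edge-up wrapper: pull the key from a
# 600-mode, root-owned file instead of hardcoding it.
KEYFILE=/etc/n2n/edge.key
SUPERNODE=198.51.100.1   # hypothetical; substitute your supernode's IP
[ -r "$KEYFILE" ] || { echo "cannot read $KEYFILE" >&2; exit 1; }
exec edge -a 10.10.2.1 -c some_community -k "$(cat "$KEYFILE")" -l "$SUPERNODE":9939
```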

So this used to happen a lot... but since yum, not so much. Like an impatient admin, I got tired of waiting and hit ctrl+c. What? You aren't going to listen? Okay, so I background it, and then kill it.

[root@core ~]# yum check-update
Loaded plugins: refresh-packagekit
dag                                                                                                                                             | 1.9 kB     00:00
dag/primary_db                                                                                                                                  | 7.1 MB     00:08
fedora/metalink                                                                                                                                 | 2.9 kB     00:00
mod-pagespeed                                                                                                                                   |    0 B     00:30 ...
http://dl.google.com/linux/mod-pagespeed/rpm/stable/i386/repodata/repomd.xml: [Errno 4] Socket Error: timed out
Trying other mirror.
updates/metalink                                                                                                                                | 2.6 kB     00:00
^Z
[1]+  Stopped                 yum check-update
[root@core ~]# killall -9 yum
[1]+  Killed                  yum check-update
[root@core ~]#

So then this leads to a corrupt rpm database. Well, I guess we just need to clean it, right? I mean, you were just running check-update.

[root@core yum.repos.d]# yum check-update
rpmdb: Thread/process 1673/3078575808 failed: Thread died in Berkeley DB library
error: db4 error(-30974) from dbenv->failchk: DB_RUNRECOVERY: Fatal error, run database recovery
error: cannot open Packages index using db3 -  (-30974)
error: cannot open Packages database in /var/lib/rpm
/usr/lib/python2.6/site-packages/yum/config.py:884: DeprecationWarning: BaseException.message has been deprecated as of Python 2.6
  raise Errors.YumBaseError("Error: " + e.message)
CRITICAL:yum.main:

Error: rpmdb open failed

[root@core yum.repos.d]# yum clean all
rpmdb: Thread/process 1673/3078575808 failed: Thread died in Berkeley DB library
error: db4 error(-30974) from dbenv->failchk: DB_RUNRECOVERY: Fatal error, run database recovery
error: cannot open Packages index using db3 -  (-30974)
error: cannot open Packages database in /var/lib/rpm
/usr/lib/python2.6/site-packages/yum/config.py:884: DeprecationWarning: BaseException.message has been deprecated as of Python 2.6
  raise Errors.YumBaseError("Error: " + e.message)
CRITICAL:yum.main:

Error: rpmdb open failed
[root@core yum.repos.d]#

Incorrect! It's a full-blown corrupt rpmdb. So we have to do it the old-fashioned way: blow away the rpm database and rebuild it. But! Let's back it up first... just in case...

[root@core yum.repos.d]# cd /var/lib/
[root@core lib]# tar -cf rpm_backup.tar rpm
[root@core lib]#

Now, let's go ahead and rebuild the rpmdb:

[root@core lib]# rm -f /var/lib/rpm/__db.00*
[root@core lib]# rpm --rebuilddb
[root@core lib]#
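And had the rebuild made things worse, the tarball from the backup step is the way back (destructive; only do this if the rebuild clearly failed):

```shell
cd /var/lib
rm -rf rpm              # throw away the broken database
tar -xf rpm_backup.tar  # restore the copy taken before the rebuild
```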

Assuming everything worked out, you should now have a working rpmdb:

[root@core lib]# yum check-update  | head
 Loaded plugins: refresh-packagekit

atop.i386                                     1.25-1.el5.rf                  dag
awstats.noarch                                7.0-2.el5.rf                   dag
clamav-devel.i386                             0.97.5-2.el5.rf                dag
git.i386                                      1.7.11.1-1.el5.rf              dag
graphviz.i386                                 2.22.0-4.el5.rf                dag
graphviz-devel.i386                           2.22.0-4.el5.rf                dag
lame.i386                                     3.99.5-1.el5.rf                dag
lame-devel.i386                               3.99.5-1.el5.rf                dag
[root@core lib]#

As part of a project, I wanted to go ahead and set up port channels on my Cisco 2924s. Apparently on 2924XLs it's a bit different.

Agg1(config)#int Fast
Agg1(config)#int FastEthernet 0/23
Agg1(config-if)#port group 1 ?
  distribution  How transmitted frames are distributed among ports
  <cr>

Agg1(config-if)#port group 1 distr ?
  destination  Transmitted frames are distributed by destination address
  source       Transmitted frames are distributed by source address

Agg1(config-if)#port group 1 dist dest?
destination

Agg1(config-if)#port group 1 distr dest
Agg1(config-if)#int FastEthernet 0/16
Agg1(config-if)#port group 1 distr dest
Agg1(config-if)#int FastEthernet 0/22
Agg1(config-if)#port group 2 dist dest
Agg1(config-if)#int FastEthernet 0/15
Agg1(config-if)#port group 2 dist dest
Agg1(config-if)#

Now, look at what we have:

Agg1#sh port group
Group  Interface              Transmit Distribution
-----  ---------------------  ---------------------
    1  FastEthernet0/16       destination address
    1  FastEthernet0/23       destination address
    2  FastEthernet0/15       destination address
    2  FastEthernet0/22       destination address
Agg1#

I then logged into both A1 and B1 and did the same thing on FastEthernet 0/23-24.

Now, if we look at show cdp neighbors, it shows our switches connected twice.

Agg1#sh cdp neigh
Capability Codes: R - Router, T - Trans Bridge, B - Source Route Bridge
                  S - Switch, H - Host, I - IGMP, r - Repeater

Device ID        Local Intrfce     Holdtme    Capability  Platform  Port ID
A1               Fas 0/16           166         T S       WS-C2924-XFas 0/23
A1               Fas 0/23           166         T S       WS-C2924-XFas 0/24
B1               Fas 0/15           173         T S       WS-C2924-XFas 0/23
B1               Fas 0/22           173         T S       WS-C2924-XFas 0/24
Agg1#

Unfortunately these switches being a little old, none of the "show lacp" stuff is there. I will need to go pull some cables to make sure this is working.

Quite frequently, if you enable debugging on something, it is going to be a little too verbose. Then you want to turn it off, or change something, but you can't see what you are typing. The solution to this is synchronous logging. It essentially carries what you typed over to the next line after the output is done flooding your terminal (or vty) buffer.

Agg1(config)#line vty 0 4
Agg1(config-line)#logging synchronous
Agg1(config-line)#exit
Agg1#debug ethernet-interface
Ethernet network interface debugging is on
Agg1#terminal monitor
Agg1#show
(didn't hit enter after typing show)

I then logged in a second time and shut/no-shut one of my port-channel interfaces to cause some chatter.

Agg1(config)#int FastEthernet 0/16
Agg1(config-if)#shut
Agg1(config-if)#no shut

Looking back at my original terminal:

Agg1#show
23:15:14: %LINK-5-CHANGED: Interface FastEthernet0/16, changed state to administratively down
23:15:14: %LINK-5-CHANGED: Interface FastEthernet0/23, changed state to administratively down
23:15:15: %LINEPROTO-5-UPDOWN: Line protocol on Interface FastEthernet0/16, changed state to down
Agg1#show
23:15:15: %LINEPROTO-5-UPDOWN: Line protocol on Interface FastEthernet0/23, changed state to down
Agg1#show
23:15:26: %LINK-3-UPDOWN: Interface FastEthernet0/16, changed state to down
23:15:26: %LINK-3-UPDOWN: Interface FastEthernet0/23, changed state to down
23:15:26: %LINK-3-UPDOWN: Interface FastEthernet0/16, changed state to up
23:15:26: %LINK-3-UPDOWN: Interface FastEthernet0/23, changed state to up
23:15:27: %SYS-3-MSGLOST: 1 messages lost because of queue overflow
Agg1#show
23:15:27: %LINEPROTO-5-UPDOWN: Line protocol on Interface FastEthernet0/23, changed state to up
Agg1#show

Notice how it re-displayed my prompt, as well as what I had previously typed. It may seem trivial, but work on these switches long enough and you will understand why this is a notable feature.
