Getting multiple /64 prefixes from ATT NVG599

2016 December 6
by Daniel Lakeland

So, it turns out that the NVG599 doesn't seem to hand out big prefixes like /60 or /61, but it will hand out several individual /64 prefixes.

So now I have full native speed ipv6, with up to 15 prefixes for internal networks (the 16th is for the WAN side network that the ATT router "owns").

Example config for wide-dhcpv6, if you have WAN on eth0 and LAN on eth1 and you want the eth1.10 VLAN to have its own prefix.

The lines "ia-pd 0" and "ia-pd 1" define two requests for prefixes.


# Default dhcp6c configuration: it assumes the address is autoconfigured using
# router advertisements.

profile default
{
  request domain-name-servers;
  request domain-name;

  script "/etc/wide-dhcpv6/dhcp6c-script";
};

interface eth0 {
    send rapid-commit;

    send ia-na 0;
    send ia-pd 0; ## request our main prefix
    send ia-pd 1; ## request a second prefix
};

id-assoc na 0 {
## puts an address on the wan side eth0
};

## define the prefix we want for our main prefix
id-assoc pd 0 {
    prefix ::/64 infinity;

    # Internal interface (LAN)
    prefix-interface eth1 {
        sla-len 0;
        sla-id 1;
        ifid 1;
    };
};

## define the prefix we want for our second one
id-assoc pd 1 {
    prefix ::/64 infinity;

    # Internal interface (VLAN)
    prefix-interface eth1.10 {
        sla-len 0;
        sla-id 1;
        ifid 1;
    };
};


Getting an IPv6 /60 Prefix with ATT 6rd tunnels

2016 December 1
by Daniel Lakeland

EDIT: although the below system WORKS, it gave me ABYSMAL performance. On my gigabit connection I was getting ~500Mbps over ipv4 and around 20Mbps over the ipv6 tunnel set up this way (test your ipv4/6 speeds at Comcast's Speed Test which tests both types of connections). Going back to getting my ipv6 from ATT's router gave me full speed ipv6. Evidently there's some traffic shaping on the ATT side that doesn't apply to my Arris router. DON'T set up the following unless you need more subnets more than you need full speed.

So, the router supplied by ATT was an Arris NVG599, it has 6rd set up by default. ATT is set up with its own 6rd /28 prefix such that by appending your DHCPv4 address you can get a /60 prefix. The Arris router supplied will delegate exactly one of the 16 /64 prefixes to your machine via a DHCPv6 request, which you can get wide-dhcp-client to do for you. This of course is fine if you just want a single /64 but what if you want something like a guest wifi VLAN with its own ipv6 prefix? You have a /60 available to you, but how to make it work?
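The prefix arithmetic the router is doing is plain bit concatenation: your delegated prefix is the ISP's 6rd prefix bits followed by your 32 IPv4 address bits, so 28 + 32 = 60. Here's a sketch of that computation in shell, using the documentation prefix 2001:db8::/28 and the documentation address 203.0.113.25 rather than ATT's real 6rd parameters:

```shell
#!/bin/sh
# 6rd: the delegated prefix is the ISP's 6rd prefix (28 bits here) with your
# 32-bit IPv4 address appended: 28 + 32 = 60 bits, i.e. a /60.
# 2001:db8::/28 and 203.0.113.25 are documentation values, not ATT's.
PREF28=0x20010db        # first 28 bits of 2001:db8::/28, as 7 hex nibbles
IPV4=203.0.113.25

# pack the dotted quad into a 32-bit integer
oldIFS=$IFS; IFS=.; set -- $IPV4; IFS=$oldIFS
ip32=$(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))

# concatenate: 28 prefix bits, then 32 IPv4 bits, left-aligned into 64 bits
hex=$(printf '%016x' $(( ((PREF28 << 32) | ip32) << 4 )))
echo "$(echo $hex | cut -c1-4):$(echo $hex | cut -c5-8):$(echo $hex | cut -c9-12):$(echo $hex | cut -c13-16)::/60"
```

The one hex nibble left between the /60 and the /64 boundary is what gives you the 16 subnets.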

First off, go to the firewall settings on the Arris box and under ip passthrough mode turn on passthrough with DHCPS-fixed, and choose your Linux router box as the machine to receive the public IP.

Now, turn OFF the ipv6 services on the Arris under "Home Network". Restart the Arris box and the router so you get fresh DHCPv4 address on your router.

Now, you're running a Debian based system of course 😉 so you'll want to set up a 6rd tunnel, get yourself a /60 prefix, and then manually delegate one of those prefixes to your LAN interface. You can potentially manually delegate additional prefixes to VLANs or other interfaces on your router box as well. Here's how:

Make sure you've installed ipv6calc

apt-get install ipv6calc

In /etc/network/interfaces

auto tun6rd
iface tun6rd inet manual
      up /etc/network/6rdup
      down ip tunnel del tun6rd

Now, you need the script /etc/network/6rdup, mine looks like:

#!/bin/bash -x

## These four variables are site-specific and must already be set when the
## commands below run (define them here or source them from a config file):
##   PUBLICIFACE  - WAN interface holding your public DHCPv4 address
##   ATT6RDPREF   - your ISP's 6rd prefix, with length (ATT hands out a /28)
##   ATT6RDRELAY  - your ISP's 6rd border relay IPv4 address
##   OURLANIFACE  - LAN interface that should get one of the delegated /64s

PUBLICIP=$(ip addr show $PUBLICIFACE | sed -n -E -e "/(192.168)/! s: *inet ([0-9.]*)/.*:\1:p")

OUR6RD=$(ipv6calc -q --action 6rd_local_prefix --6rd_prefix ${ATT6RDPREF} ${PUBLICIP} | sed -e "s/::/::1/")

OURDELEGATE1=$(echo $OUR6RD | sed -e "s|.::.*|1::1/64|")

MTU=1472 ## it's what the router uses

echo ${PUBLICIP}

echo "IP Tunnel: ${OUR6RD} via tun6rd"

ip tunnel add tun6rd mode sit local ${PUBLICIP} ttl 64
ip tunnel 6rd dev tun6rd 6rd-prefix ${ATT6RDPREF}
ip addr add ${OUR6RD} dev tun6rd
ip link set tun6rd up
ip link set dev tun6rd mtu $MTU
ip route add ::/0 via ::${ATT6RDRELAY} dev tun6rd

ip addr add ${OURDELEGATE1} dev $OURLANIFACE
exit 0 ## do more error checking if you like


Your mileage may vary, and you may need to debug this stuff. In particular, I'm not doing much error checking, and I'm not removing the ipv6 address from the internal interface when the link comes down. Bringing links up and down several times on your router might cause trouble. Either fix that or just do a reboot instead of monkeying with individual interfaces (after all, you want to make sure you can restart the thing and get a properly working network).

With this all in place, together with dnsmasq to handle router advertising and do DHCP/DHCPv6 on my local lan, and Firehol to handle the firewall, I get a fully routed ipv6 subnet on my lan with firewall that passes only very limited inbound traffic, and full outbound traffic... with no appreciable change in latency. The 6rd relay is an ATT anycast ipv4 address so it picks out the "closest" ipv4 relay for you to use. In my case "ping6" has a 9-10ms round trip for example.

IPv6 ULA for redundancy

2016 November 30
by Daniel Lakeland

The IPv6 standard has a concept called ULA (Unique Local Address), similar in nature to the 10.0.0.0/8 or 192.168.0.0/16 address spaces in IPv4. For IPv6 these are in the address space fc00::/7. They are addresses that are defined only locally within an organization and don't route on the wide internet. In general, it seems like these addresses are a bad idea to use reflexively. But what are some actual use cases?

One that I can think of is to deal with the 6rd addresses typically handed out by consumer ISPs. The way these work is that each ISP has a prefix, and then your router creates a sub-prefix by taking the ISP prefix and appending your dynamic IPv4 address. If the ISP prefix is short enough, you have a few bits of address space left over to play with for yourself. A 6rd prefix looks like the ISP's prefix bits, then your 32 IPv4 address bits, then whatever subnet bits remain before the /64 boundary.

Well, as you can probably guess, every time your power goes out for a couple of hours you might lose your IPv4 DHCP lease and now your whole network needs to be renumbered when you come back up with a new ipv6 prefix.

Ideally, when you sign up with your ISP, they'd give you a fixed static IPv6 /56 or even /48 prefix, which you would keep until you decide to move to a different ISP, but instead of that administrative hassle, they've invented a way to use the existing IPv4 DHCP infrastructure, which makes their lives easier, but your life less certain.

Sure, responding to new prefixes is do-able. But also, what about ISP outages? Just because your ISP goes down when a squirrel chews through your connection, doesn't mean you want to lose access to say your printer or scanner or file-share within your home or small business (ok, sure you've got an ipv4 set up anyway... but honestly that won't be forever).

Enter the ULA. You create a random prefix in the ULA space (get one randomly generated here) and then you set this up as an additional prefix for your local network. Now you can provide your laser printer or Samba share or web/security camera with a static local address that doesn't depend on what your ISP (or the squirrels!) had for breakfast, and your other hosts can auto-configure via SLAAC so that they can access these printers/cameras/shares via the ULA prefix.
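If you'd rather not trust a web generator, a ULA prefix is just fd (the "locally assigned" half of fc00::/7) followed by 40 random bits, per RFC 4193. The RFC technically wants those bits derived from a hash of a timestamp and a MAC address; pulling them from /dev/urandom is the common shortcut, sketched here:

```shell
#!/bin/sh
# Generate an RFC 4193 ULA /48: 0xfd plus 40 random bits = a 48-bit prefix.
r=$(od -An -N5 -tx1 /dev/urandom | tr -d ' \n')   # 5 random bytes -> 10 hex chars
ula="fd$(echo $r | cut -c1-2):$(echo $r | cut -c3-6):$(echo $r | cut -c7-10)::/48"
echo "$ula"
```

Run it once, write the result down, and use it forever; the randomness is what makes collisions with some future merger partner's network unlikely.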

If you're big enough to have your own assigned IPv6 prefix, then great, you can use those numbers, but if your ISP is someone like ATT who is using 6rd and your IPv6 prefix is inevitably going to depend on some IPv4 DHCP lease that is totally unpredictable, then ULA can give you a predictable redundant local network that always looks the same regardless of what happens on the wide internet. That's a good thing for the majority of us.


QoS Throttling Netflix and YouTube

2016 November 25
by Daniel Lakeland

If you look on the internet there are lots of professional network people, especially at Colleges and Universities, who are trying to limit the bad effects of people streaming movies on their network. Netflix in particular is very popular and takes up to around 3GB/hour of streaming HD video (~ 7 Mbps). Even a regular SD video might take 1 GB/hr (2.2 Mbps). A few hundred students all lounging around between classes could consume a full gigabit/s internet connection.
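The conversion behind those figures is just GB/hour times 8000 Mbit/GB over 3600 s/hour (decimal gigabytes assumed; integer shell arithmetic, so results truncate):

```shell
#!/bin/sh
# GB/hour of streaming -> average Mbps
gbph_to_mbps() { echo $(( $1 * 8000 / 3600 )); }

gbph_to_mbps 3   # HD stream: ~6-7 Mbps
gbph_to_mbps 1   # SD stream: ~2 Mbps
```

At roughly 7 Mbps per HD stream, it takes only about 150 simultaneous viewers to fill a gigabit uplink.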

But, how to do it? If you have a smallish network with a single uplink point, it's not too hard. A combination of dnsmasq, Firehol, and FireQOS on your router together with the Linux ipset functionality can both prioritize the video streams so that they don't stutter, and at the same time limit the total bandwidth so that other activities don't suffer. Netflix, and YouTube with broad content delivery networks, can both easily hit 200 or more Mbps for 3 seconds while buffering up HD videos. That's 3 seconds of drop-out or stuttering on your VOIP call or lock-up in your interactive game. Those high peaks of bandwidth usage can saturate WiFi connections, make interactive games or voice communications break down and generally cause problems. Forcing these streams to buffer for a longer time at lower peak bandwidth means plenty of room for small interactive packets to interleave. Solving this issue is a good way to improve interactivity and voice quality. Here's how:


  • You have a Linux machine as a router, either a commercial dedicated wifi-router with OpenWRT or a small Intel based server, such as a Mini-ITX or even a regular desktop machine with several NICs.
  • You have dnsmasq, Firehol and Fireqos installed, as well as ipset and all associated required packages ("apt-get install firehol fireqos dnsmasq ipset" on Debian).
  • You have configured your network so the computers using it get their ipv4 addresses from dnsmasq via DHCP, and also use dnsmasq as the local nameserver, so all the machines on your network ask dnsmasq on your router when they need names resolved.

There is a feature of dnsmasq which will add the looked-up IP addresses to an ipset if the name falls under a particular domain. To use this feature we'll create two ipsets (one for IPv6, one for IPv4) to hold the addresses of the domains YouTube and Netflix serve their video content from, and tie those domains to the sets with ipset= lines in dnsmasq.conf.
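Those dnsmasq lines look something like the following. The set names match the Firehol config below, but the content-delivery domains are my guesses, not something from the original post; watch your own DNS queries during playback to confirm which domains your clients actually stream from:

```
# dnsmasq.conf: addresses resolved under these domains get added to the sets
ipset=/googlevideo.com/videostream4,videostream6
ipset=/nflxvideo.net/videostream4,videostream6
```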


To create the ipsets we'll use Firehol, and then we'll use Firehol to mark packets adding something like this to the config before the first interface definition:

ipset4 create videostream4 hash:ip
ipset6 create videostream6 hash:ip

## kill off the DSCP mark inbound, don't trust others to classify our packets

#mark my voice packets high priority

#mark video packets medium priority using AF41
dscp6 class AF41 PREROUTING src6 ipset:videostream6
dscp4 class AF41 PREROUTING src4 ipset:videostream4

We've marked packets from these video streaming sources with dscp mark AF41 (decimal value 34). This is the highest "assured forwarding" class, with low drop probability. It's recommended for use with Live Video by Cisco. My assumption here is that your spouse will not like it if video stutters thanks to your messing around on the router. The voice packets get an even higher DSCP=48 priority which ensures WiFi WMM puts them in the VOICE queue.
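The queue selection falls out of the DSCP's top three bits (the old IP precedence field), which Linux maps onto the four WMM access categories. Here's a sketch of that mapping; the precedence-to-AC table is the usual 802.1d one, stated from memory rather than taken from the post:

```shell
#!/bin/sh
# WMM access category from a DSCP value: keep the top 3 of the 6 DSCP bits.
wmm_ac() {
  case $(( $1 >> 3 )) in
    6|7) echo VO ;;  # voice / network control
    4|5) echo VI ;;  # video
    1|2) echo BK ;;  # background
    *)   echo BE ;;  # best effort
  esac
}

wmm_ac 34   # AF41 -> VI (the video queue)
wmm_ac 48   # CS6  -> VO (the voice queue)
wmm_ac 46   # EF   -> VI under this mapping, which is why 48 is used for voice
```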

Finally, we'll use FireQOS to give these packets a moderate high priority but limit them to some total bandwidth. Typically maybe 4 HD streams would be enough for a single family, so around 27 Mbps would be reasonable, with some overhead required, maybe 33 Mbps would make sense.

Note, both Netflix and YouTube are modern infrastructure with plenty of IPv6 connectivity, so you will need to use an "interface46" declaration in FireQOS to prioritize BOTH types of IP packets. I have a bonded interface "bond0" that is my LAN output. In FireQOS I have QOS on the LAN side output:

interface46 bond0 lanout output rate 2500Mbit qdisc fq_codel minrate 1mbit
    class voicertp commit 1mbit ceil 10mbit
        match4 udp src MY_ASTERISK_BOX
    class video commit 1mbit ceil 33mbit
        match dscp AF41

In addition to limiting routed video stream traffic to 33mbit, this setup puts a DSCP AF41 mark on the packets, which gives them reasonably high priority over WiFi (the VI queue, intended for video use) under the WMM QoS system in 802.11n and later, so downstream on your network they will also be treated as important but not top-priority packets.

That's it! Using "fireqos status lanout" you can watch as packets go through the video class at maximum 33mbps instead of the default class at 250mbps or whatever, leaving you with plenty of spare bandwidth for interactivity over a typical WiFi connection.


Typical Nerd Network?

2016 November 20
by Daniel Lakeland

So, I'm not sure how typical this is for people who actually use computers for work and such, but this is more or less how my home network looks. With my main workstation taking files off the home server via NFS, it seemed like a good idea to split the server and printer off from the Buffalo WiFi "router" (no longer routing, just a fancy AP) and the ATA so that all the calls will be going over a separate cable from any filesystem operations.

Once gigabit fiber WAN hit, I needed to move the routing load from the little Buffalo router to the home server, which is when I added a "Smart Managed" switch. Doing that also lets me prioritize voice packets via DSCP on the local LAN so if I'm doing something like talking on the phone while browsing PDF scans of documents, the traffic to my computer reading images off the server doesn't interfere with my calls.

The home server runs on an Asrock Rack J1900D2Y which is a mini-itx motherboard with a dedicated IPMI port (shown in red). The switch also lets me prioritize IPMI traffic (slightly lower than the voice traffic but higher than default traffic) so that if something goes wrong and I need to reboot the server I can connect without lag. The server is headless and has no keyboard.

I discovered that the IPMI port on these (and also on Supermicro mobos) can fail over to be shared with the first ethernet port on the motherboard. This causes no end of trouble when the IPMI traffic starts appearing on one of the links in the bonded LAG group. It seemed like the solution would be to force the IPMI to only use the dedicated port, but I haven't yet figured out how to make that actually happen (altering it in the IPMI network settings didn't seem to work; I think I need to get into the BIOS, but that requires a reboot of the server while using IPMI, and I keep losing my IPMI connection during boot because it tries to fail over, so I can't see the BIOS screen). A reasonable workaround was to statically assign the IPMI MAC address to the port where the dedicated IPMI link goes into the managed switch. At least then the switch doesn't start getting confused about the LAG group, and the IPMI port eventually figures out that it can't talk to anyone on its failover port, so it sticks to Dedicated.

Yellow links show battery-backed UPS power, so I can still have internet access and phone calls for several tens of minutes to an hour after power goes out. That duration was much longer when the whole thing was routed by the Buffalo device, but it's still a reasonable amount of time since the little Mini-ITX server, switch, and RAID enclosure all use about 8% of the max UPS output, or something like 75 watts.

I'm pretty sure this system could handle a full office for a medium business of around 100 people with just the addition of a couple of POE switches and some desk phones (and a whole lot more floor space!). For a system like that I'd probably move the file server function to a cluster of two servers running glusterfs to handle hardware failures and planned downtime more smoothly.

As it is right now the whole thing works pretty smoothly and gives me a place to store large files and archive lots of photos without having my important files connected directly to my desktop machine where I run more bleeding edge kernels and occasionally run large computations that have a chance of accidentally filling all the RAM and bringing the machine to its knees (there was some particularly nasty issue trying to plot some of my recent graphs. I think ggplot was making a full copy of a large data frame once for each plot in a spaghetti plot, leading to 65GB RAM usage... moving to data.tables helped a lot, and also re-thinking how the plot was constructed so that it was using just one copy of the table).

How much effort goes into your home network? Considering how many WiFi networks I see where no-one even bothered to set the ESSID to something other than ATTWiFi-992 or whatever, I'm guessing it varies a lot. One thing that seems clear though is that lots of people are frustrated with their home networks as they load on more and more devices. Even a non-tech-savvy family of 2 adults and 2 teens probably has a minimum of 10-15 WiFi devices these days given smartphones, tablets, laptops, a security camera or baby monitor, a game console, Roku/FireTV/Chromecast etc.

The fact that this technology works as well as it does is testament to how inefficiently our radio spectrum is being used. Think of the whole FM radio and TV spectrum, plus the spectrum in use for business, police, fire, and so forth. Something like 50MHz to 1GHz is in my opinion utterly wasted compared to what could be done with modern techniques like Frequency Hopping Spread Spectrum or dynamic frequency allocation via negotiation protocols (compare to the way DHCP works for IP addresses).


For Dave, the QoS update

2016 November 19
by Daniel Lakeland

I've been using Fireqos for my home network. Since switching recently to Gigabit fiber it required a lot of reconfiguring of my internal network. In the process I discovered a few things:

  1. Typical consumer level routers from even a few years ago can't even begin to handle a gigabit through their firewall. You need something with an x86 type processor or a very modern ARM based consumer router. My Buffalo router could push about 150Mbps through the firewall at most.
  2. QoS is still important at gigabit speeds. You can push a lot of data into buffers very quickly. Furthermore, keeping things well paced actually lets you go faster, because ACKs make it back to the sender promptly.
  3. Don't forget the effect of crappy cables. Replace the patch cables lying around from whatever old equipment you used to have with something good. I made my own patch cables with a crimp tool and high quality Cat5e, and it cleared up packet loss that may have been an issue before as well.
  4. As Dave Taht suggested, switching from pfifo to fq_codel helped for the ssh connection class. In particular, I had been thinking of this class as mainly handling keystrokes and things for ssh sessions, but of course scp and rsync both like to push data over ssh. Because of that, I needed to put an fq_codel qdisc on the ssh class so my keystrokes would make it even when some rsync was going.
  5. Too many things have changed at once for me to know whether fq_codel would have any effect on my VOIP RTP queue, but I suspect not. Every 0.02 seconds it'll send a single UDP packet for each call. Each packet is around 1000 bytes, and there are typically 1-4 calls at most. They jump to the front of the line due to the QoS, so the queue is never going to have more than 1 or 2 packets in it. The overhead of fq_codel makes no sense when the queue never gets longer than 3 packets and never takes longer than ~0.00002 seconds to drain. If I have any issues, though, I'll revisit.
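The drain-time claim in item 5 checks out, roughly (assuming a 1 Gbps egress link, which is not stated in the post but matches the fiber setup described elsewhere here):

```shell
#!/bin/sh
# Worst case: 3 RTP packets of ~1000 bytes queued, draining at 1 Gbps.
# 1 Gbps moves 1000 bits per microsecond.
drain_us=$(( 3 * 1000 * 8 / 1000 ))
echo "queue drains in ~${drain_us} microseconds"
```

A couple dozen microseconds of queue is nothing for codel to improve on, which is the point being made.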


VOIP update: gigabit fiber cures most of what ails you

2016 November 16
by Daniel Lakeland

Ok, so I had a couple of weeks of continually degrading call quality and then realized that ATT had just started offering fiber service in my neighborhood, so I jumped on that like a gymnast on a trampoline and dropped my cable connection like a ton of bricks. After a lot of fooling around with getting the install done and turning off the cable service and then re-configuring my internal network to handle the now ridiculously fast network connection... I can assure you that if you have crappy audio quality over a cable modem it is most likely your ISP's jittery lossy junk connection and no amount of QOS will make their part of it any better.

My impression is that VOIP technology works really hard to mask the shittiness of shitty internet. I use opus codec for example using CSipSimple and it's a super high quality codec with forward error correction and blablabla. So if your internet jitters and packets arrive late or not at all, it will cover up that loss up to several percent packet loss in such a way that it just sounds like you're muffled (it can reconstruct the lower-frequency components even with packet losses). Well, it turns out that by the end of my cable modem experience, sure enough I was experiencing something like 2% packet loss and jitter that would have packet arrival times be between something like 20 ms normally, and 100ms every second or two. It seemed to coincide with time of day, and I suspect a poor/corroded/temperature sensitive splitter connection somewhere along the coax together with the shared nature of the cable in my neighborhood. Monitoring incoming traffic during a call using "fireqos status wan-in" would show steady 100 kbps incoming for several seconds (when using ulaw) and then 15-30 kbps for a second where a bunch of packets would drop out... and it would do this randomly on average around 5 second intervals. No amount of QOS on my end could solve this issue.

Also, testing this issue was hard. Things like the dslreports speed test try hard, but if the ISP is prioritizing icmp/ping packets it can give a very misleading view of packet loss and jitter for the UDP/RTP packets that make up your voice.

Now, the FIBER connection is a whole different story. It includes an IPv6 prefix I can delegate, and I now have a fully routable public IPv6 network with no extra lag/delay. In fact, it showed up a bunch of issues in my network that weren't there at the lower speeds. I moved the routing load from a consumer Buffalo wifi router to my home server which has a 4 core Celeron J1900 processor, 16 Gigs of RAM and 3 bonded NICs. It provides an NFS server, an SMB server, a web server, dnsmasq, and FireHOL/FireQOS traffic control. Pushing packets through the QOS and firewall doesn't even hit 1% CPU usage on this machine, whereas the Buffalo router was maxing out its CPU at 150-200Mbps. So, now I consistently get over 800 Mbps symmetric internet connection. The best part is that packet loss is nonexistent, the fiber transceiver has a battery backup, the jitter is around 1 or 2 ms under load, and I can dedicate 10Mbps to voice calls without even noticing it as roundoff-error.

Still, the WiFi part can be tricky especially with neighbors interfering, but I can assure you gigabit fiber is the way to go 😉

Yes, I still run QOS on the gigabit, because regardless of how big your pipe is, someone will come along and open up a big video buffering task and suck up the bandwidth if you let them. Prioritizing your voice and your ssh keystrokes ahead of all else still makes sense.

Onward to internet the way it was meant to be.


Bayesian model of differential gene expression

2016 November 5
by Daniel Lakeland

My wife has been working on a project where she's looking at 3 time-points during a healing process and doing RNA-seq. I'm analyzing the known secreted proteins for differential expression, using a fully Bayesian model in Stan:

Red genes have 90% posterior probability of up or down regulation relative to week 0 controls, at least according to the current overnight Stan run.

Fun, but non-trivial. There are around 6000 genes being considered, and there are 5 or 6 of these parameter vectors... 36000 parameters to be estimated from 60000 data points, and the parameters all have fairly long tails... Cauchy priors and so forth. Then I need a couple hundred samples, so Stan is outputting something like 5-10 million numbers. Warmup has been a problem here, and the long tails have required re-parameterizations and tuning. Fun stuff.


More VOIP over WLAN: With low bitrates disabled, decrease the beacon interval for more responsive AP switching

2016 November 4
by Daniel Lakeland

It seems that beacons from your APs are how your wireless client (say a smartphone) determines which access point to connect to. Beacons obviously take up airtime that would otherwise be used for transmitting data, which suggests reducing them to make more time available for useful stuff. But with your low bitrate modes turned off, beacons will be transmitted at 6 or 12 Mbps and hence be pretty short compared to their duration if you were still allowing the 1 Mbps modes. So this actually suggests increasing their frequency, so clients can switch between access points more quickly.

Typical settings are something like (in hostapd on OpenWRT)

    option beacon_int '100'

These counts are in 1.024 ms intervals, which means a beacon is transmitted every 102.4 ms; round off, about 100 ms between beacons. So, if you're running around your house at 10 mph you will travel about 1.5 ft in 100 ms, which suggests your beacons could be spaced at longer intervals. But it takes a while to re-associate with a new AP. Typical VOIP packets are sent at 20 ms intervals, so if you're dissociated from an AP for, say, 2 beacons, you're going to drop 10 packets, which is a noticeable audio gap.

Instead, let's drop the beacon interval to 50 and then if you're dissociated for 2 beacon intervals that's 100ms total and you drop about 5 packets, an amount that might be covered by typical jitter buffers in hard/soft phones.
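The arithmetic, with beacon_int in its native 1.024 ms time units (TUs) and the post's 20 ms RTP packet spacing:

```shell
#!/bin/sh
# Missing 2 beacons at a given beacon_int: how many 20 ms RTP packets drop?
rtp_loss() { echo $(( 2 * $1 * 1024 / 1000 / 20 )); }   # beacon_int in TUs

rtp_loss 100   # default interval: ~10 packets lost
rtp_loss 50    # halved interval:  ~5 packets lost
```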

I literally just ran around my house while talking to a test number that I have set up (it records for 30 seconds and then plays back the recording). With beacon=250 I definitely had little drop-outs in audio; with beacon=50, even running around the house to change position as fast as possible, no noticeable audio dropouts occurred.



Disabling WiFi low bitrates for more stable performance

2016 October 31
by Daniel Lakeland

Next step in my FIX MY PHONE campaign is dealing with the WiFi part. Step one was to DSCP mark my inbound RTP packets with DSCP=48 so that linux will send them over the highest priority "VO" WMM queue (WiFi Multimedia, a simple priority based QoS system for the radio). The standard for voice is DSCP=46, but linux drivers use the slightly lower priority VI (video) queue for DSCP=46, and hence I'd potentially be competing with other traffic, especially traffic to my FireTV Stick.

Second step was to disable lower bitrates. 802.11b came out in something like 1999; every piece of kit in my house supports 802.11n, which came out in ~2007, almost 10 years ago. There is really no excuse for using 1 Mbps rates except that they work at long distances. Now that I have 3 access points in my house, long range is actually a detriment, and forcing clients to switch between access points as they move around is beneficial (and, I like to pace around while talking on the phone... people would complain about that, as my phone probably lowered its data rate to stay associated rather than jump to the closer AP and keep a higher rate).

Nevertheless I noticed that in fact lots of sleeping mobile devices will connect at 1Mbps while they sit there waiting to refresh their email connections, etc. At 1Mbps sending a single packet of 1kB of data takes 8 ms. If I'm in the middle of a voice conversation, 8ms is close to half the inter-frame duration of 20ms. Let's not forget that beacons happen something like every 100ms (I set mine to 250ms). Beacons get transmitted at the lowest data rate in the basic set. So, for efficiency and channel access purposes, it makes sense to limit everyone to using higher rates, including sleeping mobile devices.

In OpenWRT the /etc/config/wireless file can be used to restrict clients to much higher data rates. I set my minimum to 12 Mbps, so now a 250 byte RTP packet takes on the order of 0.16 milliseconds to send (or less if we're at even higher rates).
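Those airtime figures come straight from payload size over rate (ignoring preambles, ACKs, and other per-frame overhead, which only make the slow rates look worse):

```shell
#!/bin/sh
# Microseconds on the air for a payload, ignoring per-frame overhead.
airtime_us() { echo $(( $1 * 8 * 1000 / $2 )); }   # $1 bytes, $2 rate in kbps

airtime_us 1000 1000    # 1 kB at 1 Mbps       -> 8000 us (the 8 ms above)
airtime_us 250 12000    # 250 B RTP at 12 Mbps -> 166 us (~0.16 ms)
```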

Here's a stanza defining the radio on my router

config wifi-device 'radio0'
    option type 'mac80211'
    option path 'pci0000:00/0000:00:11.0'
    list ht_capab 'SHORT-GI-40'
    list ht_capab 'TX-STBC'
    list ht_capab 'RX-STBC1'
    list ht_capab 'DSSS_CCK-40'
    option country 'US'
    option htmode 'HT20'
    option hwmode '11g'
    option txpower '14'
    option channel '1'
    option frag '2200'
    option rts '1000'
    option beacon_int '250'
    option basic_rate '12000 18000 24000 36000 48000 54000'
    option supported_rates '12000 18000 24000 36000 48000 54000'

The last 2 lines determine the rates required and allowed by this access point. These are advertised in the beacons, about 4 times a second (beacon_int 250). I'm also turning on rts/cts for packets over 1000 bytes and fragmentation for packets over 2200 bytes. I may turn those off; they were attempts to improve reliability given both a lot of neighbor interference (fragmenting big packets so they transmit more reliably) and a lot of low-power mobile devices (rts/cts makes the AP ask mobile devices to pay attention before transmitting to them, so devices that can't hear each other don't collide as much).

It makes sense to me that all this should really help. In particular, less time spent futzing around at 1Mbps serving low priority mobile devices refreshing their email server connections, or sending beacons saying "here I am... you can talk to me at 1Mbps if you like" means more clear-air for getting regularly spaced 20ms packets over the airways.

This stuff isn't rocket science, but it's not wiring up electrical outlets either. Understanding the physical limitations of radio spectrum use and making good choices about the way in which multiple connected devices cooperate should take something that "kind of worked" and turn it into "rock solid", I hope.