From: Mark Bartelt
Date: Wed, 19 May 2004 16:43:40 -0700 (PDT)
To: sysadmins@cacr.caltech.edu
Subject: stupid libwrap

[ Just a heads-up for anybody using (or planning to use) hosts.allow to restrict TCP connections from some set of IPs ... ]

Sigh ...

The versions of OpenSSH I had on the HP and SGI systems weren't built with any hosts.{deny|allow} support, so I grabbed the latest version, and rebuilt. I set the default deny rule to ALL:ALL, and started to build hosts.allow with a list of IP addresses I wanted to permit SSH connections from. The message below was my annoyance du jour.

Why do people _still_ use fixed-length buffers, when the amount of stuff that one needs to stash away is unknowable?

Anyway, there's presumably some #define somewhere that determines how big the table is. Gotta rebuild ...

---------------------------------------------

Date: Wed, 19 May 2004 16:20:50 -0700 (PDT)
From: Mark Bartelt
To: Some User
Subject: Re: one more thing... sorry

>> Is the server down at the moment? I can't ssh from anywhere...

OK, try now.

It appears that the mechanism which permits one to specify a list of accepted IPs uses a fixed-length table, and one which is annoyingly short. I'll see about fixing this (by recompiling, once I figure out what parameter to tweak up by an order of magnitude or two!), but for now I've fixed things by collapsing several addresses on the same subnet into a single range, thereby shortening the list.

============================================================
============================================================

From: Mark Bartelt
Date: Thu, 20 May 2004 16:05:24 -0700 (PDT)
To: sysadmins@cacr.caltech.edu
Subject: Re: stupid libwrap

Didn't take long to find the culprit; a quick grep through the tcpd source code found stuff like ...

    char sv_list[BUFLEN];

or

    char buf[BUFLEN];

... in hosts_access.c and tcpdchk.c, with the value of BUFLEN #define'd to be 2048.

Seems absurdly short to me, so I'm planning to bump it by a couple of orders of magnitude, and then rebuild.

Sigh ...

============================================================
============================================================

From: Mark Bartelt
Date: Thu, 20 May 2004 16:36:22 -0700 (PDT)
To: sysadmins@cacr.caltech.edu
Subject: Re: stupid libwrap

>> Just curious: what's wrong with this, as long as when
>> sv_list or buf are filled there is a check on the
>> length?

Well, there _is_ a check that the buffer fills, which is how I discovered this misfeature in the first place.

My point is that, if you need to keep the entire contents of some entity in memory, and you don't know in advance what upper bound (if any) there is on the amount of stuff you need to hold, then reading into a fixed-length buffer is truly shoddy programming. The fact that the program checks that the buffer is full, but that a full buffer (== a file too big for the buffer to hold) causes the program to _fail_, means that there is something very wrong.

Aside from situations which really aren't controllable (e.g. running out of virtual memory or disk space), programs should be able to cope with as much input data as they're presented with, and not malfunction just because some buffer is too short. And even in those uncontrollable situations, they should fail in the least annoying manner.
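To make that concrete, the pattern in question has roughly this shape (an illustration only, not the actual tcpd source; the function and message are made up):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define BUFLEN 2048

    /* Illustration: read one rule into a fixed-size buffer, and give
       up if it doesn't fit, instead of growing the buffer. */
    static void read_rule(FILE *fp)
    {
        char buf[BUFLEN];

        if (fgets(buf, sizeof(buf), fp) == NULL)
            return;                         /* EOF or read error */

        if (strchr(buf, '\n') == NULL && !feof(fp)) {
            /* the buffer filled up: the rule was longer than BUFLEN-1
               bytes, so the program bails out rather than coping */
            fprintf(stderr, "rule too long (limit %d bytes)\n", BUFLEN - 1);
            exit(1);
        }

        /* ... parse the rule ... */
    }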
One might instead, for example, do something like:

  - stat() the file to find its size
  - malloc() that much memory
  - read the file into that buffer
    (if it didn't all fit, it must be because somebody altered the file between the stat() and the read(), so free() the memory, go back to the stat(), and try again; keep trying until you _can_ read it all)
  - do your stuff
  - free() the malloc()'ed memory

One could do it a whole lot of other ways, many of them a lot more elegant. But the above would at least work. (A rough sketch of that loop is appended at the end of this message.)

But to assume that, sure, XYZ bytes is _plenty_ big, and thus use XYZ as the buffer size, is a programming style that should have disappeared back in the 60s ...
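For anyone who wants it, here's a rough, untested C sketch of the loop described above (the function name and the use of stdio are mine, not anything from tcpd):

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/stat.h>

    /* Read an entire file into a malloc()'ed buffer, retrying if the
       file grows between the stat() and the read().  Returns NULL on
       failure; on success, *len holds the number of bytes read. */
    static char *slurp(const char *path, size_t *len)
    {
        for (;;) {
            struct stat st;
            FILE *fp;
            char *buf;
            size_t want, got;

            if (stat(path, &st) != 0)
                return NULL;
            want = (size_t) st.st_size;

            if ((buf = malloc(want + 1)) == NULL)
                return NULL;                /* genuinely uncontrollable */

            if ((fp = fopen(path, "r")) == NULL) {
                free(buf);
                return NULL;
            }

            /* ask for one byte more than stat() reported, so we can
               tell whether the file grew in the meantime */
            got = fread(buf, 1, want + 1, fp);
            fclose(fp);

            if (got <= want) {              /* it all fit; we're done */
                buf[got] = '\0';
                *len = got;
                return buf;
            }

            /* somebody appended to the file between the stat() and
               the read(): free the memory and go back to the stat() */
            free(buf);
        }
    }

Growing the buffer with realloc() as you read would be one of the more elegant routes; the above just follows the recipe literally.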