Gigabit Ethernet is at the edge of what you could use a hub for. Consider what happens at 1Gbit/s:
Transmitter A won't realize the collision has happened until the jamming signal from B travels 50 meters back to A. The beginning of B's transmission, followed by the collision-alert jamming signal, will arrive at A right as A is transmitting the last few bits within its collision window. Some stations may receive the jamming signal sooner and hear less of A's transmission. But so long as we promise 50 meters will be the maximum distance between any two nodes, no matter in what topology hubs and repeaters are arranged, no node should expect to hear a collision after it's heard 64 bytes of valid signal. This rule also holds if a third party, like a hub, is responsible for sending the jamming signal.
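If you want to check that arithmetic, here's a minimal back-of-the-envelope sketch. It assumes signals propagate at roughly two-thirds the speed of light (about 2e8 meters per second) in copper or fiber, and it ignores repeater and PHY latency entirely, so these are ceilings rather than practical figures:

    # Propagation-only bound on the diameter of a half-duplex collision domain.
    # Assumes ~2e8 m/s signal speed in cable/fiber and zero repeater/PHY delay,
    # so real-world limits come out shorter than these.
    PROP_SPEED = 2e8   # meters per second, roughly two-thirds of c

    def max_diameter_m(bit_rate, window_bytes=64):
        window_seconds = window_bytes * 8 / bit_rate   # time to transmit the collision window
        round_trip_m = window_seconds * PROP_SPEED     # distance a signal covers in that time
        return round_trip_m / 2                        # it has to get there and back

    for label, rate in [("10Mbit/s", 10e6), ("100Mbit/s", 100e6),
                        ("1Gbit/s", 1e9), ("10Gbit/s", 10e9)]:
        print(f"{label:>9}: {max_diameter_m(rate):7.1f} m")

The 1Gbit/s row lands right at the 50-meter figure above; the 10Mbit/s and 100Mbit/s rows are the propagation-only ceilings that the official limits mentioned below have to fit under once real-world delays (PHYs, repeaters, margin) are charged against the same 64-byte budget.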
For 10Mbit/s Ethernet, ignoring repeater-induced delays, the maximum diameter (say, the maximum length of a single half-duplex FOIRL segment) is 2.8km. This is rather theoretical, because there's no good reason not to run a single unrepeated FOIRL segment full-duplex (and full-duplex means no collisions, no collision window, no network diameter limit). You would only need to run it half-duplex if the FOIRL segment is being repeated onto some other kind of segment, and in that case the repeater's delay makes the maximum allowed FOIRL length shorter.
For half-duplex 100BaseFX, the maximum length is 200-400m, though I'm not sure what the various official limits and rules are. I think 200m is safe even with repeaters.
The delay introduced by repeaters and PHYs brings the practical value I found on the Interweb down from the speed-of-light-theoretical 50 meters to about 20 meters for half-duplex 1Gbit/s Ethernet with a 64-byte collision window. For 10Gbit/s Ethernet, we're down to about two meters.
There actually is a half-duplex gigabit standard, and 1000BaseT inherits it: the 802.3z MAC works around this problem with a 512-byte collision window, extending any shorter frame on the wire (``carrier extension'') so it fills the whole 512-byte slot. By making the collision window eight times bigger, they keep the network diameter about the same as it was with 100BaseTX, but a collision window that large means the cable becomes almost all noise whenever there's any contention, far more so than on 100BaseTX. Getting good cable utilization depends on the ``carrier sense'' characteristic: normal data packets need to be fairly long compared to the collision window. The 512-byte slot also has the odd effect that, for a unidirectional TCP flow, the stream of TCP ACKs will consume on the wire about 1/6 of the raw bits of the stream they're acknowledging, and since those ACKs will now collide with the data a lot more often, it's really much worse than 1/6th. I've never heard of anyone actually using half-duplex 1000BaseT, or seen a 1000BaseT hub for sale anywhere. Clearly, for high-speed moderate-distance networks, we need something that's full-duplex, and that means some kind of switch.
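The 1/6 figure is easy to reconstruct. This minimal sketch assumes standard delayed ACKs (one ACK for every two full-size 1500-byte segments), a 1518-byte untagged data frame, and counts preamble and inter-frame gap, with the bare 64-byte ACK frame carrier-extended out to the 512-byte slot:

    # Rough wire overhead of the ACK stream relative to the data stream it
    # acknowledges on half-duplex gigabit Ethernet, where short frames are
    # carrier-extended to fill the 512-byte slot.
    PREAMBLE = 8    # preamble + start-frame delimiter, in bytes
    IFG = 12        # minimum inter-frame gap, in byte times

    data_on_wire = 1518 + PREAMBLE + IFG   # one full-size untagged frame on the wire
    ack_on_wire = 512 + PREAMBLE + IFG     # a 64-byte ACK, extended to the 512-byte slot

    # Delayed ACKs: one ACK per two full-size data segments.
    ratio = ack_on_wire / (2 * data_on_wire)
    print(f"ACK bits / data bits ~ {ratio:.2f}")   # about 0.17, i.e. roughly 1/6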
With full duplex, there are no collisions and no collision window, so the 70km links of 1550nm/1000BaseZX don't run into the problems we're discussing here. However, the 140km of fiber you need to make that round trip will hold close to 90 kBytes of in-flight data (or about 900 kBytes at 10Gbit/s), so you'd better have TCP windows at least that large if you want to fill the pipe with a single TCP circuit. :)
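That window figure is just the bandwidth-delay product of the link; here's a minimal sketch, assuming light covers roughly 2e8 meters per second in fiber (about 5 microseconds per kilometer):

    # Bandwidth-delay product of a long full-duplex fiber link: the amount of
    # data in flight during one round trip, and therefore the minimum TCP window
    # needed to keep the pipe full with a single connection.
    PROP_SPEED = 2e8   # m/s in fiber, roughly

    def bdp_bytes(bit_rate, one_way_km):
        rtt = 2 * one_way_km * 1000 / PROP_SPEED   # round-trip time in seconds
        return bit_rate * rtt / 8                  # bits in flight, as bytes

    print(f"1Gbit/s over 70km:  {bdp_bytes(1e9, 70) / 1e3:5.0f} kBytes")
    print(f"10Gbit/s over 70km: {bdp_bytes(10e9, 70) / 1e3:5.0f} kBytes")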
There's another sneaky way around this problem, used by celfones and cable Internet service: put all the stations' transmissions on a separate uplink band aimed at a single receiver (the tower or the cable head-end), and have that one receiver repeat everything onto the downlink band, so no station ever needs to hear another station directly.
802.11 does attempt some timeslicing nonsense, and it has a base station that copies signals, but it does not use this single-receiver trick on a separate uplink band, so the network diameter a single AP can efficiently serve will always be quite small, even if one were to break regulations and jack the power way up. This ``time-division duplex'' model means the uplink transmissions need to be padded with lengthening periods of pre- and post-silence as the network diameter increases, to ensure that every station can copy the AP.
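To see how quickly that padding budget grows, here's a minimal sketch: radio covers essentially the speed of light in air, so a round trip costs about 6.7 microseconds per kilometer of link distance, which you can compare against the standard 802.11 slot times of 20 microseconds (long slot) and 9 microseconds (OFDM short slot):

    # Round-trip air time for a given AP-to-station distance, compared with the
    # slot times 802.11 MAC timing is built around. Radio propagates at roughly
    # the speed of light.
    C = 3e8   # m/s

    def round_trip_us(distance_km):
        return 2 * distance_km * 1000 / C * 1e6

    for km in (0.1, 1, 5, 10, 30):
        print(f"{km:5.1f} km: {round_trip_us(km):6.1f} us round trip "
              f"(vs. 9 us short slot / 20 us long slot)")

Past a few kilometers the propagation delay alone dwarfs the slot time, which is exactly why the pre- and post-silence padding has to keep growing with the diameter.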
One can trick up this TDD model with a variety of optimizations, most of them fairly intuitive. It's common practice in large-diameter 802.11 setups to give the access point an omni antenna and give all the stations highly-directional antennas that reject signals received from other stations; I don't know if that buys you anything in practice. It's imaginable that the AP could adjust this pre- and post-gap individually for each station based on its physical location (sketched below). This would be particularly valuable if the AP is handing out CIR slots for VoIP-over-802.11 phones sending tiny packets, because the uplink slots for every active phone could happen gaplessly, followed by the TDD turnaround gap, followed by a gapless burst of tiny downlink packets from the AP to each active phone. This TDD-gap-customization trick is maybe not a simple one to pull off with mobile stations in three-dimensional space. Another useful and easy optimization would be to make sure the AP always has a CDMA code orthogonal to that of any station; this is eerily similar to separate uplink/downlink, but it's not the same, because stations still can't receive and transmit concurrently. I don't know, are they already that clever?
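As an illustration of that per-station gap-customization idea, here's a minimal sketch; everything in it is hypothetical (the station names, the distances, and especially the assumption that the AP somehow knows how far away each station is), and it isn't anything 802.11 actually does:

    # Hypothetical per-station trimming of the TDD guard padding: if the AP can
    # estimate each station's distance, the worst-case padding sized for the whole
    # cell can be replaced by padding sized for that station's actual propagation
    # delay. Purely an illustration of the idea in the text.
    C = 3e8   # m/s, radio propagation speed

    def guard_us(distance_m):
        # One-way propagation delay for a station at this distance, in microseconds.
        return distance_m / C * 1e6

    CELL_RADIUS_M = 10_000   # hypothetical worst-case radius the generic padding covers

    stations = {"phone-a": 150, "phone-b": 2_500, "backhaul": 9_000}   # estimated meters
    for name, dist in stations.items():
        saved = guard_us(CELL_RADIUS_M) - guard_us(dist)
        print(f"{name:>8}: {guard_us(dist):5.2f} us of guard instead of "
              f"{guard_us(CELL_RADIUS_M):5.2f} us (saves {saved:5.2f} us per turnaround)")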
Some of these radio tricks might be useful for designing extremely high-speed busses on printed circuit boards, but I think these are tending towards full-duplex, point-to-point topologies (PCIe, RAMBUS, HyperTransport), just like Ethernet.