
Hello to all,

Seeking advice from those of you who have had to change out the gauges in your cars. I have an early 90's CMC WB Speedster and plan on changing out the gauges, and I was wondering how I should attack this project in gaining access to the installed gauges? Of course, I'm looking for the path of least resistance. Any advice would be greatly appreciated. (The replacement gauges are from a CMC Speedster kit and have never been used. I changed out the old incandescent bulbs for LEDs.).....Thanks!

Attachment: IMG_1003 (1 image)
Original Post


Before trying to pull the gauges out into the cockpit:

Get a nice blanket and fold it up so it fits in the driver's foot well.

Lie down on your back with your face looking up under the dash and you'll be looking right up at the gauges.

Each gauge is held in place with a square, "U" shaped retainer held onto the back of the gauge with small, knurled, round brass nuts.  Spin those off (using your fingers should be enough) and remove the retainer from each gauge.  Now they will be loose so you can remove them from inside the cockpit.

As Greg mentioned, you should be able to pull the gauges out, but how far they'll come out depends on how much slack the builder left in the wires.  You may need someone else to gently pull them out one at a time while you lie back under there to see which wire(s) hold the gauge in place, and try to give them more slack.

Eventually, you'll figure out what's going on and can then remove and label each wire, one at a time, to re-connect them to the new gauges.  I don't know if you have CMC "Classic" or "Vintage" gauges or a set of VDO 356 Reproduction gauges, but the wiring for all of those should be in the "Resources > Library" section on the menu bar at the top of this page, under one of the CMC/Fiberfab build manuals.  If you can't find them, PM me and I'll send you a set.

Last edited by Gordon Nichols

OK, got the new gauges in....this project is not for the faint of heart.  Whoever put these in originally did not leave any slack to pull the gauges out through the holes (they must have been installed during the original fabrication of the car?).  I did most of the work upside down on my back. It would have helped if I had a third hand with long, teeny-tiny fingers. I don't know if I'll keep the aftermarket radio that was installed; I kind of like the look without it.....will need to think about that one. More to come.....

Attachment: IMG_1031 (1 image)

OK, I’m gonna “Geek Out” on you for an instant:

Seymour Cray built the fastest computer in the world from 1976 to 1982, the Cray 1.  It was a beast, the size of a compact car, built with a vertical, cylindrical backplane "motherboard" in the middle and the computer boards mounted radially outward from it all around the core.  The design minimized wire length between the boards by running the wires across/along the cylinder in the middle.  I would guess that the cylindrical motherboard was maybe 18"-20" in diameter.  Shorter wires mean faster computer speeds, and it was 10X faster than its closest competitor.
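
(If you want rough numbers on the "shorter wires = faster" point, here's a little back-of-the-envelope sketch.  The velocity factor and the wire lengths are my own assumed round figures for illustration; the only real number is the Cray 1's 80 MHz clock.)

```python
# Back-of-envelope: why shorter backplane wires mattered.
# Assumes signals travel at ~2/3 the speed of light in the wiring
# (a typical figure for copper interconnect; illustrative only).

C = 299_792_458            # speed of light in a vacuum, m/s
VELOCITY_FACTOR = 0.66     # assumed propagation speed as a fraction of c

def wire_delay_ns(length_m: float) -> float:
    """One-way signal delay, in nanoseconds, for a wire of the given length."""
    return length_m / (C * VELOCITY_FACTOR) * 1e9

# Compare a short run across the cylindrical backplane (~0.5 m, assumed)
# with a long run across a conventional flat cabinet (~2 m, assumed).
for length_m in (0.5, 2.0):
    print(f"{length_m} m of wire: ~{wire_delay_ns(length_m):.1f} ns one way")

# The Cray 1 ran an 80 MHz clock, i.e. a 12.5 ns cycle, so a couple of
# nanoseconds saved per hop was a real slice of the cycle budget.
```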

In order to install those hundreds of wires inside the central cylinder/motherboard, Cray found electronics technicians from South Korea who were small enough to fit inside the cylinder to work.  They were lowered down inside the motherboard with a wire-wrap gun, a bunch of wire, and a point-to-point list to connect everything together.   Kinda makes you glad you're a "big person", relatively speaking.

Someone gave me one of the later Cray computers when I was setting up a supercomputing center for university researchers. It was boxed in 12 wooden crates each the size of a forklift.

After careful estimations of space, power and cooling requirements, I sold it for spare parts and bought a cluster of small computers with the proceeds. Tech does become obsolete quickly.

@Michael Pickett wrote: "After careful estimations of space, power and cooling requirements, I sold it for spare parts and bought a cluster of small computers with the proceeds. Tech does become obsolete quickly."

IIRC, the Cray 1 ran what's called "Emitter Coupled Logic" (ECL) integrated circuits - particularly fast but power-hungry devices - so the entire system needed about 115 kilowatts of power (208V 3-phase) from a large bank of power supplies (the power supplies were in the base - what looks like a sitting bench all around the column).   That was the easy part - AC power is easy to get, but it was a lot like "brute force computing".

Cooling the beast was an adventure, especially if what you were running was CPU-intensive, like at Los Alamos and the NSA.  The ECL logic could produce enough heat to warm the Sears Tower in Chicago (remember....it was power hungry.....power in = heat out).  The Cray 1 was a two-story system, similar to IBM mainframes at the time.  The CPU (in the photo above) was on one floor and the liquid Freon refrigeration system was on the floor below, with all the cooling juice running through stainless steel piping custom-made for each installation (Stan would have loved it).
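
(A quick sketch to put numbers on the "power in = heat out" point, using the ~115 kW figure above and the standard conversions of roughly 3,412 BTU/hr per kW and 12,000 BTU/hr per ton of refrigeration - ballpark only.)

```python
# "Power in = heat out": essentially every watt the ECL logic draws has
# to be removed as heat by the Freon loop. Ballpark numbers, based on
# the ~115 kW figure mentioned above.

BTU_HR_PER_KW = 3412.14    # 1 kW dissipated continuously ~= 3,412 BTU/hr
BTU_HR_PER_TON = 12_000    # 1 ton of refrigeration = 12,000 BTU/hr

power_kw = 115             # approximate Cray 1 draw (from the post above)

heat_btu_hr = power_kw * BTU_HR_PER_KW
cooling_tons = heat_btu_hr / BTU_HR_PER_TON

print(f"{power_kw} kW in -> {heat_btu_hr:,.0f} BTU/hr of heat out")
print(f"That's roughly {cooling_tons:.0f} tons of refrigeration, around the clock")
```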

IBM did much the same thing, often running their CPUs on one floor, with the storage disks one floor down and the I/O processors one floor up, all to keep interconnecting cables as short as possible and to spread out the cooling demand - the Cray was no different.

The Cray cooling system had a lot of leaks in the first 20 or so systems, until advanced stainless steel alloys and better tube-welding techniques were developed (a lubricant in the liquid Freon attacked the stainless steel).

By 1982, Sun, Silicon Graphics, Digital Equipment Corp., IBM, Thinking Machines and Data General all had 64-bit systems in the works, and Sun was demonstrating multiple/parallel server systems that were beginning to approach Cray 1 speeds (and in the case of Thinking Machines, far exceed them), so the industry was moving toward far less expensive options.  Thinking Machines produced a system known internally at the company as the "Cray Killer" because it was much faster at a fraction of the cost of a Cray.   Those systems began to be installed in late 1984, I think.

OK, that's enough computer history and thread drift.   Any more requires beer.

Last edited by Gordon Nichols

@Gordon Nichols wrote: "The CPU (in the photo above) was on one floor and the liquid Freon refrigeration system was on the floor below, with all the cooling juice running through stainless steel piping custom-made for each installation (Stan would have loved it)."

I actually used to work on Liebert units cooling the server rooms for telecom, and more critically in a nearby nuclear plant. We also had some computer chillers we serviced.

I was a cub about the time you were in the prime of your career, Gordon. By the time I got my card, the "computer room" units discharging air through perforated tile in the floor of a sealed room were on their way out. But while they were in their heyday, there was no amount of O/T that was too much. Those were the days of round-the-clock service projects, because every minute the machine was down was costing tens of thousands of dollars. That was a lot of pressure for a 26 y/o guy making $12/hr, but we all went through the grinder doing it.

The equipment we used to work on was huge: 100+ ton chillers and million+ BTU/hr boilers were the norm. A typical supermarket would have several 125+ hp racks running 1000 lbs of gas each. Nobody ever cared what anything cost.

Not so any more. Most "server rooms" are a single rack of a couple of servers making so little heat that a ductless mini-split will handle it.
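
(To put rough numbers on the shrinkage - the only hard conversion here is 12,000 BTU/hr per ton of cooling; the server wattages and the old-room load are made-up round figures for illustration.)

```python
# Rough comparison of old vs. new "server room" cooling loads.
# The wattages below are hypothetical round numbers, not measurements.

BTU_HR_PER_WATT = 3.412    # 1 W dissipated continuously ~= 3.412 BTU/hr
BTU_HR_PER_TON = 12_000    # 1 ton of cooling = 12,000 BTU/hr

def cooling_tons(load_watts: float) -> float:
    """Tons of cooling needed to reject a continuous electrical load."""
    return load_watts * BTU_HR_PER_WATT / BTU_HR_PER_TON

small_room = cooling_tons(2 * 500)   # a couple of ~500 W servers (assumed)
old_room = cooling_tons(50_000)      # an old raised-floor room at ~50 kW (assumed)

print(f"Two small servers: ~{small_room:.2f} tons -> a 1-ton mini-split loafs along")
print(f"A 50 kW raised-floor room: ~{old_room:.0f} tons -> dedicated chillers")
```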

From a service standpoint, there's less separation between hairy-chested super-techs and Bob's Heating, Cooling, and Coin-Laundry Repair than there ever has been. The work gets lighter every year. The guys doing it 50 years from now (scratch that, 10 years from now) will be replacing tiny little <1 hp throw-away modules with inverters and logic boards that will change every 6 months, making a two year old unit completely obsolete. Every location will have dozens, if not hundreds of them.

Nobody will "fix" anything.

Last edited by Stan Galat

We cooled everything with air coming through the floor.  The tiles immediately below a cabinet were slotted, "increased flow" sections, but they still had to support the considerable weight of a cabinet (much of my stuff sat on a footprint of 1 tile by 3 floor tiles).  I don't remember the exact weight, but it needed the highest-rated tiles available - we were pushing a ton on later systems.  Have one of them fall through a raised floor and we had to winch it back up by attaching to the ceiling trusses, or bring in a special computer room crane system.
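
(For scale, a minimal sketch of what "pushing a ton" on a 1 x 3 tile footprint works out to per tile, assuming the weight spreads evenly - which it really doesn't, since the casters concentrate it.)

```python
# Floor-loading sketch: a cabinet "pushing a ton" on a footprint of
# 1 tile by 3 tiles, assuming (unrealistically) an even spread.

cabinet_lbs = 2000          # "pushing a ton" (from the post above)
tiles_under_cabinet = 3     # 1 tile wide by 3 tiles long

per_tile_lbs = cabinet_lbs / tiles_under_cabinet
print(f"~{per_tile_lbs:.0f} lbs on each 2' x 2' raised-floor tile")
```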

Our "brain" (you geeks would know it as a storage I/O Processor) was in a center cabinet flanked by two more cabinets, each with 492, 2-1/2" hard-drive disks in it (accessed front and back.  It weighed a lot.   It was also a "Pull Through" system with the fans at the top, rather than a single squirrel-cage fan at the bottom as is common in mini-computers so we relied on a LOT of incoming air (and positive air pressure) coming up through the floor and a guaranteed 40F - 50F temp under the floor (lower the better).  Give us that and we were happy campers.  


We were a "fault tolerant" system and any major component in our cabinet (power supplies, disk drives, Director cards - anything, really)  could be swapped out "hot" while the system was running with no data interruption.  I explained that to customers as pulling up beside a Formula 1 race car on the track in the middle of a race, at 200mph, reaching over, removing the driver's helmet, swapping out his brain with a new one and putting the helmet back on without ever losing speed or position.

Somewhere along the way I visited a McDonnell Douglas "Tymenet" Data Center in St. Louis.  It was a three-story building with no windows, about 400 yards by 800 yards (really big).  You entered at ground level on the second floor (the first floor was below grade) and walked through a tunnel with glass sides where you could look out at the gigantic air ducts feeding cooling air to the building - each duct was big enough to drive a tractor-trailer through.  The chillers were in separate buildings and each floor was cooled separately.  Storage disks were on the bottom floor, CPUs (servers) in the middle and I/O processors on the top.  It was a jaw-dropping building.

But the coolest part of the trip was lunch at the nearby airport as we were leaving.  It took us about an hour to eat our lunch and all that time we watched a Harrier Jump-Jet just sitting in one spot about 100 feet over the runway, hovering in place.  Sometimes, probably out of boredom, it would slowly rotate in place 360º without changing altitude - just sitting there.  Pretty cool!

Attachment: 200 (1 image)
Last edited by Gordon Nichols

Takes a lot of practice to do it this well.....    

I'll try to curb it in the future.   A little.  From time to time.

Besides, @Michael McKelvey - You're an Architect.  You should love seeing cool buildings.  Like the American Express data center in Arizona that is completely underground (to make it easier to cool).  Really cool buildings.

 

Last edited by Gordon Nichols

If you go to Philadelphia and stand at the back (southern end) of the Liberty Bell building (away from the entrance), look across Chestnut Street to Independence Hall, and up on the second floor on the right you'll be looking at one of the State of Pennsylvania's administrative data centers.  The view out the windows into Independence Park is really cool, but I don't think the computers in there appreciate it.

The same can be said for the State of Maryland's Lottery data center, hidden in an historic building in the heart of Annapolis.

And Stan mentioned how much money some companies lose when their computers go down.  There is a place in lower Manhattan that is THE clearing house for all transactions on Wall Street.  They take an infinitesimally small percentage of the value of the transactions they process daily (in any currency) as their fee for clearing them.  They told me once that they were processing around $30 BILLION per minute.  Needless to say, they are set up in a mirrored, fault-tolerant environment that never goes down.  Even on 9/11, they were transferred over to a mirrored data center across the East River in literally seconds and kept running until Pres. Bush halted Wall Street transactions for three days.
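
(Just for scale - the fee rate in the sketch below is invented; only the $30 billion per minute figure comes from what they told me.)

```python
# What "an infinitesimally small percentage" of $30 billion a minute looks
# like. The fee rate is a made-up example; only the volume figure comes
# from the post above.

volume_per_minute = 30e9         # ~$30 billion per minute (from above)
assumed_fee_rate = 0.000001      # hypothetical: 0.0001% of each transaction

fees_per_minute = volume_per_minute * assumed_fee_rate
fees_per_day = fees_per_minute * 60 * 24

print(f"${fees_per_minute:,.0f} per minute in clearing fees")
print(f"${fees_per_day:,.0f} per day, if that volume held around the clock")
```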

Getting back to Larry's problem (remember Larry?  He was swapping his CMC Gauges a while back?) :  Larry had no slack in the wires going to his gauges.  

In a data center it takes a lot of man/woman power and hours to string sometimes hundreds of data cables under the raised floor.  The space beneath the floor is typically 2' - 3' deep and in most centers the cables are tossed down there and pulled to the destination and terminated there.  We did not string those cables - That was the customer's responsibility.

It takes just as much effort to remove those cables if something changes in the center, because you first have to find which cables are affected and then gingerly drag them back out (risking an interruption), so a lot of (most) centers just clip the ends off and leave them there.  You can sometimes see masses of cables 2' to 3' deep down under there from the past 30 years, and maybe 15% of what you see is still active, but they're all the same color (usually blue, orange or yellow), often unmarked, and you can't tell which is which - just like some CMC Speedsters we've seen!

I bet @DannyP has a few stories like this, too.

Sorry for the drift, Larry.  Very quick typing fingers.....    

Last edited by Gordon Nichols