The Craftmans Shop > New from Old
The Sequel - Oh Blimey I bought a CNC Lathe (Beaver TC 20)
awemawson:
Yes a bit of Googling brought me to the same conclusion :thumbup:
Now the debate is on how much to do on board the Arduino and how much to bring back into the PC, but first I have a much simpler task - physically connecting to the 48 way edge connector in a way I can repeatably connect and disconnect things.
I rescued a 90 degree mounting PCB version of the correct socket, and I'd thought to just solder it into a bit of Veroboard for onward wiring, but it's three rows of 16 pins on a 0.1" matrix which rather scuppers that idea. I need a "DIN 41612 48 way breakout board" - I can't see that existing, or if it does King's Ransoms will be involved.
The pins are so flimsy that direct wiring, even with sleeving, won't last ten minutes! Maybe the type of perforated board with separate pads might work, but I don't have any in stock ATM!
awemawson:
OK, Arduino Mega 2560 V3 ordered, and amazingly there ARE DIN 41612 prototype boards available at a reasonable cost - RS do them, but they're also on eBay. OK, it's twice the width needed, but that doesn't matter.
All three rows of pins are accommodated, but only the outer (A & C) rows are extended to tracks; the middle B row just has isolated pads. It should work OK I think.
Another bonus - I've found some Arduino 2560 Mega code on Github that has been written to test 62256 ram chips which should prove a time saver - OK I need to tweak it as it is only doing 25% in terms of data width and address length, but the donkey work has been done :thumbup:
mc:
I would probably have gone for an Arduino Due to give some extra processing power, but a Mega will do the job.
I was going to suggest a Raspberry Pi, depending on what programming language you prefer, but having just checked, they only have a maximum of 17 GPIO pins.
awemawson:
But the Due is a 3.3 volt system, which adds a further layer of complexity as the RAM card is all 5 volt TTL signal levels.
Speed isn't really an issue - it can just churn away counting through all 64K addresses with 64 data variants, though probably I won't bother: I'll do all ones, all zeros and the old U*U* sequence ('U' is ASCII 0x55, which gives alternating ones and zeros).
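That fixed-pattern pass can be sketched in plain C. This is only an illustration of the idea, not the actual tester: `sim_ram[]` stands in for the 62256 (32K x 8), and the hypothetical `write_byte()`/`read_byte()` helpers would really drive the Mega's port pins on hardware.

```c
#include <stdint.h>

/* sim_ram[] stands in for the 62256 socket; on the real tester the
 * write_byte()/read_byte() bodies would drive the Mega's port pins. */
#define RAM_SIZE 32768u   /* 62256 = 32K x 8 */

static uint8_t sim_ram[RAM_SIZE];

static void    write_byte(uint16_t addr, uint8_t data) { sim_ram[addr] = data; }
static uint8_t read_byte(uint16_t addr)                { return sim_ram[addr]; }

/* Fill every location with one fixed pattern, then read it all back.
 * Returns the number of mismatching addresses (0 = pass). */
static unsigned fixed_pattern_test(uint8_t pattern)
{
    unsigned errors = 0;
    for (uint32_t a = 0; a < RAM_SIZE; a++)
        write_byte((uint16_t)a, pattern);
    for (uint32_t a = 0; a < RAM_SIZE; a++)
        if (read_byte((uint16_t)a) != pattern)
            errors++;
    return errors;
}
```

You'd call this once each with 0x00, 0xFF, 0x55 and 0xAA to cover all-zeros, all-ones and both alternating-bit patterns.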
In practice I'm not looking for marginal issues with this RAM, it'll either work or not, but the tester will let me exercise the address lines and data lines which I can then check with a 'scope while it's running !
I suspect that the EPROM cards in this controller use exactly the same interface (except for writing) so hopefully I'll be able to peek inside them as well.
jiihoo:
I've done embedded software at work. And still do. I've seen a memory test - provided in source code format by a chip vendor - that failed to identify broken or incorrectly soldered memory chips.
Two different kinds of memory tests are needed: Ones that exercise the memory byte-by-byte or word-by-word and ones that exercise the address lines.
1) All ones, all zeros and the 01010101 pattern (0x55), written one memory address at a time, are of the first category (they exercise each memory location). So is the classic "walking bit", where you first walk a 1-bit through a memory location (with all other bits being zero), then continue from the next memory location until the end of memory. Once the whole memory has been tested this way, you'd follow it up with a walking 0-bit test.
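The walking-bit idea can be sketched as below. Again this is just an illustration against a simulated 32K x 8 RAM; the hypothetical `write_byte()`/`read_byte()` would talk to the real chip on actual hardware.

```c
#include <stdint.h>

#define RAM_SIZE 32768u   /* simulated 32K x 8 device */

static uint8_t sim_ram[RAM_SIZE];

static void    write_byte(uint16_t a, uint8_t d) { sim_ram[a] = d; }
static uint8_t read_byte(uint16_t a)             { return sim_ram[a]; }

/* Walk a single 1 (and then a single 0) through each bit position of
 * each memory location. Returns the number of failures (0 = pass). */
static unsigned walking_bit_test(void)
{
    unsigned errors = 0;
    for (uint32_t a = 0; a < RAM_SIZE; a++) {
        for (int bit = 0; bit < 8; bit++) {
            uint8_t one  = (uint8_t)(1u << bit);   /* walking 1 pattern */
            uint8_t zero = (uint8_t)~one;          /* walking 0 pattern */
            write_byte((uint16_t)a, one);
            if (read_byte((uint16_t)a) != one)  errors++;
            write_byte((uint16_t)a, zero);
            if (read_byte((uint16_t)a) != zero) errors++;
        }
    }
    return errors;
}
```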
This category of tests proves that each of the data lines works and that each memory location works assuming the address lines worked correctly. And that is an assumption that needs to be verified next.
2) This category of tests verifies that all address lines work correctly and that each memory location really is a separate memory location. Here you first fill the whole memory with some non-repeating pattern (or at least a pattern whose cycle length isn't divisible by 8) and then read it back, verifying that each memory address returns what was written there. You could use the Fibonacci sequence (1, 1, 2, 3, 5, 8, ...) for this - use unsigned integers for the math; yes, it will overflow, but that doesn't matter, as there should always be enough variance in the least significant bits left after an overflow. Or you could just write "memory address + 11" to each memory location (the +11 is arbitrary but important, as you don't want to use the same data as the memory address). Either way, write all memory addresses with your chosen sequence first, then read back all memory addresses and verify that the expected data is there.
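A sketch of that fill-then-verify address test, with the same simulated-RAM stand-ins as before. One wrinkle worth noting (my addition, not from the post above): on an 8-bit-wide chip, "address + 11" truncated to a byte repeats every 256 locations, so it can miss aliasing on the upper address lines; folding the high address byte in with an XOR avoids that.

```c
#include <stdint.h>

#define RAM_SIZE 32768u   /* simulated 32K x 8 device */

static uint8_t sim_ram[RAM_SIZE];

static void    write_byte(uint16_t a, uint8_t d) { sim_ram[a] = d; }
static uint8_t read_byte(uint16_t a)             { return sim_ram[a]; }

/* Fold the 15-bit address into one byte and add the arbitrary offset,
 * so the expected value differs between locations that could alias. */
static uint8_t expected(uint16_t a)
{
    return (uint8_t)((a ^ (a >> 8)) + 11u);
}

/* Category 2 test: fill ALL of memory first, read it ALL back second.
 * Returns the number of mismatching addresses (0 = pass). */
static unsigned address_test(void)
{
    unsigned errors = 0;
    for (uint32_t a = 0; a < RAM_SIZE; a++)
        write_byte((uint16_t)a, expected((uint16_t)a));
    for (uint32_t a = 0; a < RAM_SIZE; a++)
        if (read_byte((uint16_t)a) != expected((uint16_t)a))
            errors++;
    return errors;
}
```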
Why is the category 2 test needed? Well, assume you have a 16-bit system (64k addresses) that has a solder bridge between the two most significant address line bits. Or a broken memory chip that internally shadows the upper half of its memory on top of the lower half. The first category of tests is not going to catch that. What will be seen is that the lower 32k of memory works correctly and the upper 32k of memory works correctly when tested byte-by-byte or in blocks less than 32k, but once the whole memory is tested at once it will be noticed that the upper 32k is the same as the lower 32k...
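The failure mode described above is easy to demonstrate in simulation: below, a hypothetical fault makes the top address line (A14 on a 32K device) float, so the upper 16K shadows the lower 16K. The per-location (category 1) test passes anyway, because each write is read straight back from the same aliased cell; the fill-everything-first (category 2) test catches it.

```c
#include <stdint.h>

#define RAM_SIZE   32768u
#define FAULT_MASK 0x3FFFu   /* simulated fault: A14 ignored, upper 16K aliases lower 16K */

static uint8_t sim_ram[RAM_SIZE];

static void    write_byte(uint16_t a, uint8_t d) { sim_ram[a & FAULT_MASK] = d; }
static uint8_t read_byte(uint16_t a)             { return sim_ram[a & FAULT_MASK]; }

/* Category 1: write then immediately read back, one location at a time.
 * The aliased cell still holds what we just wrote, so this passes. */
static unsigned per_location_test(uint8_t pattern)
{
    unsigned errors = 0;
    for (uint32_t a = 0; a < RAM_SIZE; a++) {
        write_byte((uint16_t)a, pattern);
        if (read_byte((uint16_t)a) != pattern) errors++;
    }
    return errors;
}

/* Category 2: fill all of memory first, read everything back second.
 * The second pass through the lower half finds the upper half's data. */
static unsigned address_test(void)
{
    unsigned errors = 0;
    for (uint32_t a = 0; a < RAM_SIZE; a++)
        write_byte((uint16_t)a, (uint8_t)((a ^ (a >> 8)) + 11u));
    for (uint32_t a = 0; a < RAM_SIZE; a++)
        if (read_byte((uint16_t)a) != (uint8_t)((a ^ (a >> 8)) + 11u))
            errors++;
    return errors;
}
```

With this fault injected, `per_location_test()` reports zero errors while `address_test()` fails on every lower-half address.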
Actually if testing byte-by-byte even totally unconnected data lines might pass the test if there was a sufficient amount of stray capacitance on the data lines...
My real-world experience with the memory test that didn't find problems: it was only doing the category 1 test - I think it was the walking-bit test. That kind of test is OK if you're looking for random errors in memory that is otherwise working, but not OK for finding soldering mistakes in production or badly misbehaving memory chips.