Discussion:
Mandrake 9.1 and ServeRAID 5i
Raf Schietekat
2003-09-14 16:06:31 UTC
Let me drop this in your lap. It's an extremely serious problem for
users of a specific RAID system (you know, the people who are paranoid
that anything should go wrong with their data or the availability of
their server): it crashes the server and messes up that data. Evidence
suggests (also look at that bug report mentioned below!) that SANE may
be the guilty party. I will just reproduce the last message I sent to
some parties involved (with [...] used to omit some irrelevant text and
to avoid divulging some identities), which may be rather verbose, but
you never *really* know what's exactly relevant and/or convincing enough
and/or interesting.
Note: ***Urgent***: If you (Mandrake and maybe IBM) would like to have
me perform specific tests on my system, perhaps with Mandrake 9.2 RC1,
it will almost have to be this week, because next week I'd like to bring
the server into production. Please use this opportunity!
No reaction, BTW, and now it's too late (I probably should have come
here before, but I did not know what SANE was, and then my message was
blocked for a while before I sent it again), unless it would be to
help with a very targeted, convincing, and quick intervention (I would
have to invest time in a complete reinstallation, which I am obviously
reluctant to do). My workaround will be a cron(-like?) task that will
disable anything related to scanners every minute or so (the frequency
of the existing msec security check), to protect against accidental
updates that reinstate the code.
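Something like this is what I have in mind (a rough sketch only; the
scannerdrake path is the one I saw on a Mandrake 9.0 installation, the
file name is made up, and the list of binaries to disable would need
checking against what is actually installed):
# cat /etc/cron.d/no-scanner-probe
* * * * * root chmod a-x /usr/bin/sane-find-scanner /usr/sbin/scannerdrake 2>/dev/null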
Brief description: Mandrake 9.1 crashes systems with ServeRAID.
Extensive report below, including a reference to a previous bug report,
currently marked as needing further information (well, here is the info).
[...]
For [...], whom I've
included in cc, a summary, in case you want to step in: I've been
test-running an IBM xSeries 235 with ServeRAID 5i for several weeks,
with Mandrake 9.1 (probably still the most recent version). Yesterday,
I inserted two 3COM NICs in bus B, which also carries the ServeRAID 5i
card. To test that the latter was still independently running at full
100 MHz speed as in the documentation and not dragged down to the
NICs' 33 MHz, I did "time tar cf - / | wc -l", which showed about 7.5
MB/s throughput as before (unless it was more like 10 MB/s before, I'm
not exactly sure). I then used drakconf to see whether the NICs were
identified correctly. I did this from a remote ssh -X session, which
froze up. I could not open another ssh connection. On the console
itself, the mouse pointer was still moving, but I could not type
anything into the logon screen. The bottom two drives were spinning
continuously, while the top one wasn't doing anything, this for a RAID
5 setting involving all three drives. Since nothing seemed to work, I
did a reset (the small reset button; I hope I shouldn't have used, e.g., the
power button instead). During reboot, the file system proved to be
corrupted, and could not be repaired (I will have to find out how to
do that, or reinstall everything).
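(Presumably the way to repair it is to boot from rescue media and run
e2fsck by hand on each damaged file system while it is unmounted, e.g.
"# e2fsck -f -y /dev/sda7", with the device name adjusted, but I have
not tried that yet.)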
After some further research using www.google.com for ["Mandrake 9.1"
ServeRAID], which at first didn't seem necessary because I had
repeatedly and successfully done all these steps before and the only
new thing were the two NICs on bus B (the same bus that carries the
ServeRAID 5i card), it appears that I may have been bitten by what's described at
http://qa.mandrakesoft.com/show_bug.cgi?id=3421
(this is where I saw Thierry Vignaud's address; I've found [...]'s
address in /usr/sbin/scannerdrake on a Mandrake 9.0 installation)
What I then found and did (see also the note after this list):
- /dev/sda8 had disappeared, although its neighbours were still there,
- I tried MAKEDEV, but it uses /usr/bin/perl, which lives on /dev/sda7
and was not yet mounted,
- I did "mount /usr",
- I did ./MAKEDEV,
- I rebooted, and things seemed fine.
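(A note on the MAKEDEV step: the missing node could presumably also
have been recreated by hand, without /usr mounted, since for SCSI disk
sda partition 8 is block device major 8, minor 8:
# mknod /dev/sda8 b 8 8
# chown root:disk /dev/sda8; chmod 660 /dev/sda8
)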
Then I wanted to try a few things to see whether I could pinpoint the
problem. Here is a complete account of what I did, probably erring on
the side of giving too much information, but in the hope that it will be
helpful for you to fix Mandrake's configuration managers etc. (I suggest
that a probe for ServeRAID precedes and disables a probe for a scanner,
perhaps with user input, unless the scanner probe can be changed so that
it does no damage to the ServeRAID controller card configuration).
The system now only has a(n extra) NIC on bus A, which is separate from
bus B which also carries the ServeRAID controller card. If I do "#
scannerdrake" from a remote ssh -X session (I like to work from my
laptop; the server is in a little server room), the system wants to
install some packages, but I refuse to cooperate. It then says that it
is scanning, or something (gone too fast for me to be able to read), and
then it says "IBM SERVERAID is not in the scanner database, configure it
manually?" (an obvious sign that something is going wrong with the
scanner probe). I respond No. It then says "IBM 32P0042a S320 1" is not
in the scanner database, configure it manually?". I don't even know what
that is. I respond No. Then it does the same for "IBM SERVERAID" again;
I respond No. And the same again for the other one; I respond No. Then I
get a window:
- title: Scannerdrake
- text: There are no scanners found which are available on your system.
- button: Search for new scanners
- button: Add a scanner manually
- button: Scanner sharing
- button: Quit
I persevered, and clicked "Search for new scanners"; that turns out to
be the same sequence as before, from just after the scanning. No crash
yet. I did Quit.
Then I did vi `which harddrake2`, and I tried to add the line that
[...] suggested (next if $Ident =~ "SCANNER";), but then vi froze
(perhaps some of the file was still in memory from a previous vi
session, but then it wanted to access the disk?). The other ssh sessions
continued to work, unlike during the previous failure; I tried "man perl"
in another one to try and find an explanation for the double quotes
([...]), but got "Input/output error", repeatedly. I can still open
other ssh sessions,
and the console itself works, but I see that all 3 drives have an amber
status light (not the green activity light, and if I remember correctly
the status light is normally off), and that the "System-error LED" is
lit on the "Operator information panel" (only other lit signs are
"Power-on LED" and "POST-complete LED"), with also one LED lit in the
"diagnostic LED panel" inside the computer, next to a symbol of a disk
and the letters "DASD". When I look next, the console has gone from
graphics to text mode, and is filling with messages about "EXT3-fs
unable to get inode block". Meanwhile, the remote ssh sessions are still
responsive. I don't try anything on the console, and use a remote ssh
session to try "# shutdown -h now" as root, but obviously the command
cannot be read from disk (error message "bash: shutdown: command not
found"). ctl-alt-del on the console's keyboard: same thing (this causes
init to (try to) invoke shutdown). I then did a reset (actually a power
cycle; just a reset would have been better). The three drives were still
marked defunct (status lights on). I used the ServeRAID support CD to
boot, and could set two of the physical drives online, but the last one
did not have that right-click menu option (I even set the second one
defunct again, was able to bring the third online, but then the option
was missing on the second one). So then I briefly removed the second
drive from its hot-swap bay, and when I inserted it again it started
getting rebuilt from the other drives, and (according to the log)
completed a little over an hour later (for 30+ GB disk capacity, of
which maybe less than 1 GB in use, if that matters). I tell ServeRAID
Manager (?) to reboot, and then I'm stuck with a garbled Mandrake splash
screen:
Boot: linux-secure
Loading linux-secure
Error 0x00
Boot: linux-securere
ctl-alt-del works (but brings no salvation).
Was data lost during the reset/power cycle (hopefully not during the
rebuild, because that would defeat the purpose of having a RAID), or as
early as the corruption of the ServeRAID controller card that
(ultimately?) set the drives to defunct state? Apparently the boot
doesn't even get to the stage where it would decide about clean state of
the file systems, so this is not something we can afford on a system in
production (evidence that recovery is not a simple matter and may
involve data recovery from backup, unless *perhaps* a boot floppy
takes the system past this stage, after which ext2/ext3 gets a chance to
repair itself, but I have no boot floppy... (will make one now, though,
next chance I get)).
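(For the record, on Mandrake/Red Hat-style systems a boot floppy can
apparently be made with the mkbootdisk utility, assuming that package
is installed:
# mkbootdisk --device /dev/fd0 `uname -r`
)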
I reboot into diagnostics (PC DOCTOR 2.0, apparently a specific feature
of the IBM server), and the SCSI/RAID Controller test category passed.
Next I will proceed to reinstall the whole system from scratch.
I'm not sure yet, though (why hasn't this happened before, and has a
conclusion been reached?); that's why I've also cc'ed [...].
It seems strange, however, if this is indeed the
problem, that a hardware adapter card should prove so vulnerable to a
probing method used for a different device (a scanner), but then again
I have no close knowledge of these issues.
BTW, the machine is not yet in production (I was going to do that, but
I guess I can now wait a few days), and available for tests.
I still think it's really unfortunate that there is no list of known
*in*compatibilities, because who would suspect, with ServeRAID
support, or drivers anyway, available for SuSE, TurboLinux, Caldera
(SCO Group, the enemy!), and RedHat, that Mandrake would pose a
problem? The same goes for Mandrake's site, of course (all of IBM is
just "known hardware", and xSeries 235 and ServeRAID 5i are just absent).
http://www-1.ibm.com/servers/enable/site/xinfo/linux/servraid
http://www.mandrakelinux.com/en/hardware.php3
[...]
Raf Schietekat <***@ieee.org>
abel deuring
2003-09-14 21:11:48 UTC
Post by Raf Schietekat
Let me drop this in your lap. It's an extremely serious problem for
users of a specific RAID system (you know, the people who are paranoid
that anything should go wrong with their data or the availability of
their server): it crashes the server and messes up that data. Evidence
suggests (also look at that bug report mentioned below!) that SANE may
be the guilty party. I will just reproduce the last message I sent to
some parties involved (with [...] used to omit some irrelevant text and
to avoid divulging some identities), which may be rather verbose, but
you never *really* know what's exactly relevant and/or convincing enough
and/or interesting.
your report sounds indeed quite nasty...
Post by Raf Schietekat
No reaction, BTW, and now it's too late (I probably should have come
here before, but I did not know what SANE was, and then my message was
blocked for a while before I sent it again), unless it would be to
help with a very targeted, convincing, and quick intervention (I would
have to invest time in a complete reinstallation, which I am obviously
reluctant to do). My workaround will be a cron(-like?) task that will
disable anything related to scanners every minute or so (the frequency
of the existing msec security check), to protect against accidental
updates that reinstate the code.
well, unless you indeed have a scanner installed on your server, there
is no point in having Sane-related programs installed (scanimage,
xscanimage, saned and sane-find-scanner come to mind -- I can't comment
on any Mandrake-specific stuff, because I have never used or installed
Mandrake).
Post by Raf Schietekat
After some further research using www.google.com for ["Mandrake 9.1"
ServeRAID], which at first didn't seem necessary because I had
repeatedly and successfully done all these steps before and the only
new thing were the two NICs on bus B (the same bus that carries the
ServeRAID 5i card), it appears that I may have been bitten by what's described at
http://qa.mandrakesoft.com/show_bug.cgi?id=3421
abel deuring
2003-09-14 21:24:14 UTC
Post by abel deuring
Post by Raf Schietekat
Let me drop this in your lap. It's an extremely serious problem for
users of a specific RAID system (you know, the people who are paranoid
that anything should go wrong with their data or the availability of
their server): it crashes the server and messes up that data. Evidence
suggests (also look at that bug report mentioned below!) that SANE may
be the guilty party. I will just reproduce the last message I sent to
some parties involved (with [...] used to omit some irrelevant text
and to avoid divulging some identities), which may be rather verbose,
but you never *really* know what's exactly relevant and/or convincing
enough and/or interesting.
your report sounds indeed quite nasty...
Ouch. I meant of course the problem, not your report being nasty...

sorry
Abel
Raf Schietekat
2003-09-15 12:40:00 UTC
Post by abel deuring
[...]
your report sounds indeed quite nasty...
As you later correct, the problem. :-)
Post by abel deuring
[...]
well, unless you indeed have a scanner installed on your server, there
is no point in having Sane-related programs installed (scanimage,
xscanimage, saned and sane-find-scanner come to mind -- I can't
comment on any Mandrake-specific stuff, because I have never used or
installed Mandrake).
I presume you mean there is no need. The point is probably ease of use
(not to be dismissed out of hand), assuming no accidents occur.
Post by abel deuring
[...]
As already mentioned, I have never worked with Mandrake, so I can't
make any comment on scannerdrake based on real knowledge of this
program. But I assume that it tries to identify scanners either by
calling the standard Sane programs sane-find-scanner or scanimage, or
it uses
Comment #6 by Thierry says 'basically harddrake2 uses scannerdrake that
uses "LC_ALL=C sane-find-scanner -q"', which led me to come here.
Post by abel deuring
[...]
Since scanimage may load many backends, and since I haven't read the
source code of every Sane backend, I am not 100% but "only" 99% sure
that these backends will not try to do anything further with the
processor devices belonging to the RAID controller. It is highly
unlikely that "IBM YGHv3 S2" is mentioned as the vendor and/or device
IDs anywhere in a Sane backend. Hence it seems that your RAID controller
does not like INQUIRY commands sent too often -- which would be in
violation of the SCSI standard. SCSI devices should be able to respond
to a few commands like INQUIRY and TEST UNIT READY under any
circumstances. And especially these two commands should not alter the
state of a SCSI device in any way.
So is it basically the card's fault? Seems rather a silly defect... I
wonder what IBM would say about that.
Post by abel deuring
[...]
Do you see any messages from the Linux driver of the RAID controller?
I have no more information than this, and no real desire to provoke
another failure unless I know it will be worthwhile.
Post by abel deuring
If the controller or the driver becomes confused, file system errors
are unavoidable, I think.
Hmm...
Post by abel deuring
[...]
I think it is highly unlikely that a Sane program or backend or this
special Mandrake "scanner search and installation" program is to blame
for your problem.
Can you confirm that all that sane-find-scanner does is query the card,
with only requests that must be safe according to the SCSI standard?
Post by abel deuring
If you need your server up and running quite soon, I'd recommend
using another RAID controller. (Sorry, I don't have a positive hint
for a certain model...)
That's not a very attractive option. I have run a full diagnostic, but
that was IBM's own. Is there another diagnostic that will prove the
hardware is the guilty party, without provoking the response "Mandrake is
not supported"? Is SANE in Red Hat (which is supported by IBM)?
Post by abel deuring
If you want to dig a bit deeper into the problem, you may try to run
this Mandrake scanner installation program with the environment
variable SANE_DEBUG_SANEI_SCSI set to 255. This will produce quite
a lot of debug output (which should probably be sent to an IDE hard disk
on the server or to your notebook, because the file systems on the
RAID array will probably break again). The most interesting things are
the lines like
rb>> rcv: id=0 blen=96 dur=10ms sgat=0 op=0x12
"op=..." is the SCSI command code sent to a device. 0x12 is INQUIRY;
0x00 is TEST UNIT READY; these two commands should not cause any harm
to a decent SCSI device. If you see anything else, we may have found a
bug in Sane.
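A quick way to summarize all the command codes in such a log (assuming
a GNU grep recent enough to have the -o option):
# grep -o 'op=0x[0-9a-f]*' debug.log | sort | uniq -c
This counts how often each SCSI command code was sent, so anything
besides 0x12 and 0x00 stands out immediately.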
Of course, this test will only make sense if the Mandrake software
either calls sane-find-scanner or scanimage, or if it uses the
sanei_scsi library.
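For instance (user and host names are placeholders), the debug output
could be captured on your notebook over ssh, so that nothing needs to
be written to the RAID array:
# export SANE_DEBUG_SANEI_SCSI=255
# scannerdrake 2>&1 | ssh user@notebook 'cat > scannerdrake.log'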
I wish I had the time, or a spare test system. Maybe...
Thanks for the reply,
abel deuring
2003-09-15 13:32:50 UTC
As already mentioned, I have never worked with Mandrake, so I can't make
any comment on scannerdrake based on real knowledge of this program. But
I assume that it tries to identify scanners either by calling the
standard Sane programs sane-find-scanner or scanimage, or it uses
Comment #6 by Thierry says 'basically harddrake2 uses scannerdrake that
uses "LC_ALL=C sane-find-scanner -q"', which led me to come here.
Ok. Obviously I did not read this web page very carefully...
[...]
Since scanimage may load many backends, and since I haven't read the
source code of every Sane backend, I am not 100% but "only" 99% sure
that these backends will not try to do anything further with the
processor devices belonging to the RAID controller. It is highly
unlikely that "IBM YGHv3 S2" is mentioned as the vendor and/or device
IDs anywhere in a Sane backend. Hence it seems that your RAID controller
does not like INQUIRY commands sent too often -- which would be in
violation of the SCSI standard. SCSI devices should be able to respond
to a few commands like INQUIRY and TEST UNIT READY under any
circumstances. And especially these two commands should not alter the
state of a SCSI device in any way.
So is it basically the card's fault? Seems rather a silly defect... I
wonder what IBM would say about that.
Let's ask them ;)
[...]
Do you see any messages from the Linux driver of the RAID controller?
I have no more information than this, and no real desire to provoke
another failure unless I know it will be worthwhile.
If the controller or the driver becomes confused, file system errors
are unavoidable, I think.
Hmm...
[...]
I think it is highly unlikely that a Sane program or backend or this
special Mandrake "scanner search and installation" program is to blame
for your problem. If you need your server up and running quite soon, I'd
recommend using another RAID controller.
Can you confirm that all that sane-find-scanner does is query the card,
with only requests that must be safe according to the SCSI standard?
"grep sanei_scsi_cmd sane-find-scanner.c" shows that sanei_scsi_cmd is
called only two times; in both cases an INQUIRY command is sent to the
device. And INQUIRY commands are supposed to be "safe".
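(An equivalent standalone test, by the way, assuming the sg3_utils
package is installed: "# sg_inq /dev/sg1" sends a single INQUIRY to the
device and prints the decoded result.)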
Henning Meier-Geinitz
2003-09-16 13:32:19 UTC
Hi,
Post by abel deuring
Can you confirm that all that sane-find-scanner does is query the card,
with only requests that must be safe according to the SCSI standard?
"grep sanei_scsi_cmd sane-find-scanner.c" shows that sanei_scsi_cmd is
called only two times; in both cases an INQUIRY command is sent to the
device.
If only sane-find-scanner is used (not scanimage) there are only these
two commands sent to each existing SCSI device (/dev/sg*). If both
/dev/sga and /dev/sg0 exist, the device may be queried twice but not
more. I would be surprised if that's really the cause of the problem.

If you (the original author) used a very old version of SANE it might
be possible that a bad link destroys data, e.g. "/dev/scanner ->
/dev/sda". I have never actually tried if that really "works". But
checking if there is a link /dev/scanner shouldn't harm.
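(Checking that is a one-liner, for what it's worth: "ls -l /dev/scanner"
shows whether such a link exists and where it points.)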

I have never heard of anyone else having problems with sane-find-scanner
and hard discs or disc arrays.

Bye,
Henning
abel deuring
2003-09-16 14:12:43 UTC
Post by Henning Meier-Geinitz
Hi,
Post by abel deuring
Can you confirm that all that sane-find-scanner does is query the card,
with only requests that must be safe according to the SCSI standard?
"grep sanei_scsi_cmd sane-find-scanner.c" shows that sanei_scsi_cmd is
called only two times; in both cases an INQUIRY command is sent to the
device.
If only sane-find-scanner is used (not scanimage) there are only these
two commands sent to each existing SCSI device (/dev/sg*). If both
/dev/sga and /dev/sg0 exist, the device may be queried twice but not
more. I would be surprised if that's really the cause of the problem.
Yes, this sounds weird, but sane-find-scanner does not do more than
send two INQUIRY commands to a SCSI device, which should not do any
harm.
Post by Henning Meier-Geinitz
If you (the original author) used a very old version of SANE it might
be possible that a bad link destroys data, e.g. "/dev/scanner ->
/dev/sda". I have never actually tried if that really "works". But
checking if there is a link /dev/scanner shouldn't harm.
Well, some RAID controllers themselves have an SG device file for
"maintenance purposes": to configure which disks belong to an array, to
replace a disk in an array, and whatever else you can do with a RAID
controller. And the ServeRAID seems to be such a controller, so there is
a need for SG device file nodes under /dev. While this also means that
the virtual RAID disks will have both an SG and an SD device file, it
should not pose any risk to issue two INQUIRY commands to their SG
device file, if they behave well...
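By the way, a generic way to see those processor devices (nothing
Sane-specific): "cat /proc/scsi/scsi" lists every SCSI device the
kernel knows about, with its vendor/model strings and its device type,
so the controller's "Processor" entries are easy to spot next to the
"Direct-Access" entries of the virtual disks.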
Post by Henning Meier-Geinitz
I have never heard of anyone else having problems with sane-find-scanner
and hard discs or disc arrays.
Me neither, but Raf mentioned in his first mail a Mandrake support web
page, which shows that he is not the only one who gets broken ServeRAID
arrays when harddrake tries to search for scanners.

Abel
Raf Schietekat
2003-09-17 09:56:17 UTC
Post by abel deuring
[...]
Well, some RAID controllers themselves have an SG device file for
"maintenance purposes": to configure which disks belong to an array, to
replace a disk in an array, and whatever else you can do with a RAID
controller. And the ServeRAID seems to be such a controller, so there is
a need for SG device file nodes under /dev. While this also means that
the virtual RAID disks will have both an SG and an SD device file, it
should not pose any risk to issue two INQUIRY commands to their SG
device file, if they behave well...
Let's see, with the test you suggested (IBM says they won't support
Mandrake, so...):
# export SANE_DEBUG_SANEI_SCSI=255
# sane-find-scanner -v 2>&1 | tee /mnt/floppy/log-v
No failure, even after a few repetitions. In a different shell (I admit
I don't immediately know how to get rid of SANE_DEBUG_SANEI_SCSI otherwise):
# sane-find-scanner -q 2>&1 | tee /mnt/floppy/log-q
# while true; do sane-find-scanner -q > /dev/null; done
No failure. Hmm, strange, nothing on the floppy, and the files are still
in /mnt/floppy when looking from the shell (even though the floppy is no
longer in the drive). Guess I'll still have to configure that, somehow
(luckily it was not required). Well, anyway, this is what's in log-q:
found SCSI processor "IBM SERVERAID 1.00" at /dev/sg1
found SCSI processor "IBM 32P0042a S320 1 1" at /dev/sg2
found SCSI processor "IBM SERVERAID 1.00" at /dev/sgb
found SCSI processor "IBM 32P0042a S320 1 1" at /dev/sgc
My questions to you:
- Isn't sane-find-scanner supposed to only find scanners? Shouldn't it
return nothing, and isn't this the first stone falling in a domino effect?
- Maybe the rest of scannerdrake etc. invokes other SANE programs for
these devices that will upset the RAID card? Does that sound plausible?
I'll go have a look anyway.
Post by abel deuring
[...]
Raf Schietekat <***@ieee.org>
abel deuring
2003-09-17 11:12:02 UTC
Post by Raf Schietekat
Post by abel deuring
[...]
Well, some RAID controllers themselves have an SG device file for
"maintenance purposes": to configure which disks belong to an array, to
replace a disk in an array, and whatever else you can do with a RAID
controller. And the ServeRAID seems to be such a controller, so there is
a need for SG device file nodes under /dev. While this also means that
the virtual RAID disks will have both an SG and an SD device file, it
should not pose any risk to issue two INQUIRY commands to their SG
device file, if they behave well...
Let's see, with the test you suggested (IBM says they won't support
# export SANE_DEBUG_SANEI_SCSI=255
# sane-find-scanner -v 2>&1 | tee /mnt/floppy/log-v
No failure, even after a few repetitions. In a different shell:
# sane-find-scanner -q 2>&1 | tee /mnt/floppy/log-q
# while true; do sane-find-scanner -q > /dev/null; done
No failure.
So it seems that neither the RAID controller nor its Linux driver nor
sane-find-scanner is buggy.
Post by Raf Schietekat
Hmm, strange, nothing on the floppy, and the files are still in
/mnt/floppy when looking from the shell (even though the floppy is no
longer in the drive). [...] Well, anyway, this is what's in log-q:
found SCSI processor "IBM SERVERAID 1.00" at /dev/sg1
found SCSI processor "IBM 32P0042a S320 1 1" at /dev/sg2
found SCSI processor "IBM SERVERAID 1.00" at /dev/sgb
found SCSI processor "IBM 32P0042a S320 1 1" at /dev/sgc
- Isn't sane-find-scanner supposed to only find scanners? Shouldn't it
return nothing, and isn't this the first stone falling in a domino effect?
You're right, in general ;) The problem is some HP scanners. (I think
models like the ScanJet 2, 3 and 4.) These scanners don't use the SCSI
commands defined for scanners (like "set scan window" or "start scan").
Instead they have their own command language, called HP-SCL or
similar. Hence they can't claim to be "regular SCSI scanners", but
return "I'm a SCSI processor" in the INQUIRY data.

This device type is used by all those SCSI devices which aren't
"ordinary" SCSI devices like hard disks, tape drives, CDROMs etc.
Another example of a SCSI processor device is the "robot part" of a
tape changer.

sane-find-scanner lists all processor devices, just to be sure that no
device which is *possibly* a scanner is missed. It is up to the user or
the calling program to decide whether the device called "IBM SERVERAID 1.00"
is indeed a scanner or something else -- in this case the "management
part" of a RAID controller.
Post by Raf Schietekat
- Maybe the rest of scannerdrake etc. invokes other SANE programs for
these devices, that will upset the RAID card? Does that sound plausible?
I'll go have a look anyway.
Yes, scannerdrake might indeed call a program like scanimage. scanimage
is mainly a command line scan program, but it can also be used to list
all scanners that are connected to a host and supported by a Sane
backend.

If you run something like "strace scannerdrake", you should be able to
see which programs are invoked by scannerdrake -- but that will produce
tons of debug output: I'm not sure if the stuff will fit onto a floppy
disk...
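Limiting the trace to process creation should cut that down
dramatically; something like this (standard strace options):
# strace -f -e trace=execve -o /tmp/scannerdrake.trace scannerdrake
Here -f follows child processes, -e trace=execve records only program
invocations, and -o sends the trace to a file instead of the terminal.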

Abel
Raf Schietekat
2003-09-17 14:16:58 UTC
Post by abel deuring
[...]
So it seems that neither the RAID controller nor its Linux driver nor
sane-find-scanner is buggy.
[...]
It's ridiculous. Maybe I should become a psychiatrist, because people
are more predictable. I've now done "perl -d `which scannerdrake`", and
no failure occurred. Then I tried without -d. Then I tried harddrake2,
and drakconf, several times. No failure. The only strange thing that
remains is that, across reboots (I'm just trying anything at this
stage), the list of Scanner devices in harddrake2 (part of drakconf) is
either the 4 mentioned before, or just the two called SERVERAID, which
does seem rather suspicious, but probably no immediate cause for alarm.
Yet, before the reinstallation (could that be it?), one failure occurred
right after using harddrake2, one was seemingly provoked by
scannerdrake, and then there's also the bug report I found.

Let's look one last time. drakconf>Hardware>Hardware List. A window
"Please wait/Detection in progress" appears. It seems to take a *very*
long time. I go check the server. Two of three drives have their
"defunct" indicator LEDs lit...

But that's enough for today. Next thing is probably installation of Red
Hat, or Turbo Linux, or SuSE, because IBM won't support Mandrake. If I
find out anything else that's relevant here, I'll report it. Meanwhile,
thanks for your efforts to help.

Raf Schietekat <***@ieee.org>
abel deuring
2003-09-17 18:20:12 UTC
Post by Raf Schietekat
Post by abel deuring
[...]
So it seems that neither the RAID controller nor its Linux driver nor
sane-find-scanner is buggy.
[...]
It's ridiculous. Maybe I should become a psychiatrist, because people
are more predictable.
;)
Post by Raf Schietekat
I've now done "perl -d `which scannerdrake`", and
no failure occurred. Then I tried without -d. Then I tried harddrake2,
and drakconf, several times. No failure. The only strange thing that
remains is that, across reboots (I'm just trying anything at this
stage), the list of Scanner devices in harddrake2 (part of drakconf) is
either the 4 mentioned before, or just the two called SERVERAID, which
does seem rather suspicious, but probably no immediate cause for alarm.
Yet, before the reinstallation (could that be it?), one failure occurred
right after using harddrake2, one was seemingly provoked by
scannerdrake, and then there's also the bug report I found.
Let's look one last time. drakconf>Hardware>Hardware List. A window
"Please wait/Detection in progress" appears. It seems to take a *very*
long time. I go check the server. Two of three drives have their
"defunct" indicator LEDs lit...
But that's enough for today. Next thing is probably installation of Red
Hat, or Turbo Linux, or SuSE, because IBM won't support Mandrake.
Yes, tracing the bug could take quite some time, and I presume that you
need to get your server installed and running. While I would really like
to know why and how the RAID array is messed up, this job must probably
be left to Mandrake and/or IBM -- most people don't have a RAID
controller and SCSI disks lying around in a junk box.

Abel
