mdadm: failed to RUN_ARRAY

Original post: http://www.issociate.de/board/post/37120/mdadm:_failed_to_RUN_ARRAY.html

mdadm: failed to RUN_ARRAY

One of my RAID 5 arrays stopped last night. I get this when I try to
restart it:
[root [at] pep raid]# mdadm -A /dev/md0
mdadm: device 5 in /dev/md0 has wrong state in superblock, but /dev/hdk1
seems ok
mdadm: failed to RUN_ARRAY /dev/md0: Invalid argument

Is my hdk1 gone? Should I try to replace it?

from dmesg:

Oct 9 10:36:04 pep kernel: md: export_rdev(hdf1)
Oct 9 10:36:04 pep kernel: [events: 0000000b]
Oct 9 10:36:04 pep kernel: md: bind<hdf1,1>
Oct 9 10:36:04 pep kernel: [events: 0000000b]
Oct 9 10:36:04 pep kernel: md: bind<hdg1,2>
Oct 9 10:36:04 pep kernel: [events: 0000000b]
Oct 9 10:36:04 pep kernel: md: bind<hdh1,3>
Oct 9 10:36:04 pep kernel: [events: 00000009]
Oct 9 10:36:04 pep kernel: md: bind<hdi1,4>
Oct 9 10:36:04 pep kernel: [events: 0000000a]
Oct 9 10:36:04 pep kernel: md: bind<hdk1,5>
Oct 9 10:36:04 pep kernel: [events: 0000000b]
Oct 9 10:36:04 pep kernel: md: bind<hde1,6>
Oct 9 10:36:04 pep kernel: md: hde1's event counter: 0000000b
Oct 9 10:36:04 pep kernel: md: hdk1's event counter: 0000000a
Oct 9 10:36:04 pep kernel: md: hdi1's event counter: 00000009
Oct 9 10:36:04 pep kernel: md: hdh1's event counter: 0000000b
Oct 9 10:36:04 pep kernel: md: hdg1's event counter: 0000000b
Oct 9 10:36:04 pep kernel: md: hdf1's event counter: 0000000b
Oct 9 10:36:04 pep kernel: md: superblock update time inconsistency --
using the most recent one
Oct 9 10:36:04 pep kernel: md: freshest: hde1
Oct 9 10:36:04 pep kernel: md: kicking non-fresh hdi1 from array!
Oct 9 10:36:04 pep kernel: md: unbind<hdi1,5>
Oct 9 10:36:04 pep kernel: md: export_rdev(hdi1)
Oct 9 10:36:04 pep kernel: md0: removing former faulty hdi1!
Oct 9 10:36:04 pep kernel: md0: kicking faulty hdk1!
Oct 9 10:36:04 pep kernel: md: unbind<hdk1,4>
Oct 9 10:36:04 pep kernel: md: export_rdev(hdk1)
Oct 9 10:36:04 pep kernel: md: md0: raid array is not clean -- starting
background reconstruction
Oct 9 10:36:04 pep kernel: md0: max total readahead window set to 2560k
Oct 9 10:36:04 pep kernel: md0: 5 data-disks, max readahead per
data-disk: 512k
Oct 9 10:36:04 pep kernel: raid5: device hde1 operational as raid disk 0
Oct 9 10:36:04 pep kernel: raid5: device hdh1 operational as raid disk 3
Oct 9 10:36:04 pep kernel: raid5: device hdg1 operational as raid disk 2
Oct 9 10:36:04 pep kernel: raid5: device hdf1 operational as raid disk 1
Oct 9 10:36:04 pep kernel: raid5: not enough operational devices for
md0 (2/6 failed)
Oct 9 10:36:04 pep kernel: RAID5 conf printout:
Oct 9 10:36:04 pep kernel: --- rd:6 wd:4 fd:2
Oct 9 10:36:04 pep kernel: disk 0, s:0, o:1, n:0 rd:0 us:1 dev:hde1
Oct 9 10:36:04 pep kernel: disk 1, s:0, o:1, n:1 rd:1 us:1 dev:hdf1
Oct 9 10:36:04 pep kernel: disk 2, s:0, o:1, n:2 rd:2 us:1 dev:hdg1
Oct 9 10:36:04 pep kernel: disk 3, s:0, o:1, n:3 rd:3 us:1 dev:hdh1
Oct 9 10:36:04 pep kernel: disk 4, s:0, o:0, n:4 rd:4 us:1 dev:[dev 00:00]
Oct 9 10:36:04 pep kernel: disk 5, s:0, o:0, n:5 rd:5 us:1 dev:[dev 00:00]
Oct 9 10:36:04 pep kernel: raid5: failed to run raid set md0
Oct 9 10:36:04 pep kernel: md: pers->run() failed ...
Oct 9 10:36:04 pep kernel: md: md0 stopped.
Oct 9 10:36:04 pep kernel: md: unbind<hde1,3>
Oct 9 10:36:04 pep kernel: md: export_rdev(hde1)
Oct 9 10:36:04 pep kernel: md: unbind<hdh1,2>
Oct 9 10:36:04 pep kernel: md: export_rdev(hdh1)
Oct 9 10:36:04 pep kernel: md: unbind<hdg1,1>
Oct 9 10:36:04 pep kernel: md: export_rdev(hdg1)
Oct 9 10:36:04 pep kernel: md: unbind<hdf1,0>
Oct 9 10:36:04 pep kernel: md: export_rdev(hdf1)

[root [at] pep raid]# cat /proc/mdstat
Personalities : [raid5]
read_ahead not set
unused devices: <none>

Thanks.
-Sharif


--
Sharif Islam


Sharif Islam [ Sat, 09 October 2004 17:39 ] [ ID #121478 ]
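
A note on why the assembly fails: the dmesg above shows hdi1 and hdk1 carrying older event counters (00000009 and 0000000a against 0000000b on the other four members), so md kicks both as stale, and a 6-disk RAID5 cannot run with only four members left (it tolerates a single missing disk). The same information can be read straight from each superblock before deciding what to do; a minimal sketch, using the device names from the log above:

# Compare the Update Time / State / Events fields across all members
# to confirm which superblocks are stale.
for d in /dev/hd{e,f,g,h,i,k}1; do
    echo "== $d =="
    mdadm --examine "$d" | grep -E 'Update Time|State|Events'
done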

Re: mdadm: failed to RUN_ARRAY

OK, more on this. I think two of my RAID drives failed, and after I
posted the email below, I did something that I am guessing I shouldn't
have.

mdadm --zero-superblock /dev/hdk1
mdadm --zero-superblock /dev/hdi1

Should I try recreating the array in degraded mode?

mdadm --create /dev/md0 --level=5 /dev/hde1 /dev/hdf1 /dev/hdg1 /dev/hdh1 missing

But I think that only works with one failed device.

-Sharif


Sharif Islam wrote:

> [original post quoted in full above; snipped]


Sharif Islam [ Sat, 09 October 2004 18:38 ] [ ID #121479 ]
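
On the idea of recreating in degraded mode: `missing` is accepted by `mdadm --create` in place of a device name; it leaves one slot empty so the array starts degraded, and a RAID5 can run with exactly one member missing, so the approach can work here as long as the remaining five partitions are usable and the new array matches the old geometry (device count, device order, chunk size, layout). The command quoted above would also need a `--raid-devices` count. A hedged sketch only; the chunk size and device order are assumptions that must be confirmed against a surviving superblock (the command that finally worked appears at the end of this thread), bearing in mind that hdi1 dropped out earlier, so anything written to the array after that point may not be consistent on it:

# Re-create the 6-disk RAID5 with one slot left empty ("missing").
# --chunk and the device order are placeholders -- they MUST match the
# values recorded from the old superblocks before this is run.
mdadm --create /dev/md0 --level=5 --raid-devices=6 --chunk=128 \
      /dev/hde1 /dev/hdf1 /dev/hdg1 /dev/hdh1 /dev/hdi1 missing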

recreate superblock? (was Re: mdadm: failed to RUN_ARRAY)

Is there any way to recover the superblock? I mistakenly ran
--zero-superblock on /dev/hdi and /dev/hdk.
/dev/hdi is one of the bad ones, but I think it is not totally gone, so I
was trying to force it.

[root [at] pep root]# mdadm -A /dev/md0 --force /dev/hd[efghi]1
mdadm: no RAID superblock on /dev/hdi1
mdadm: /dev/hdi1 has no superblock - assembly aborted

I would appreciate some hints. Thanks.
-Sharif


Sharif Islam wrote:

> Ok, more on this. I think two of my RAID drives failed. and after I
> posted the email below, I did something that I am guessign I shouldn't
> have.
>
> mdadm --zero-superblock /dev/hdk1
> mdadm --zero-superblock /dev/hdi1
>
> Should I try recreating the array in degrading mode.
>
> mdadm --create /dev/md0 --level=5 /dev/hde1 /dev/hdf1 /dev/hdg1
> /dev/hdh1 missing
>
> But I think that only works with one failed device.
> -Sharif
>
> [original post quoted in full above; snipped]

Sharif Islam [ Mon, 11 October 2004 18:16 ] [ ID #162519 ]
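
On the forced assembly attempt: `--zero-superblock` wipes only the md metadata on the device, not the filesystem data, but once that metadata is gone there is nothing for `mdadm --assemble --force` to read, which is exactly the failure above. The untouched members still describe the old array, so its geometry is still recoverable from them; a quick check sketch, using the device names from this thread:

mdadm --examine /dev/hdi1    # expect an error: no md superblock left here
mdadm --examine /dev/hde1    # still prints the original RAID5 layout and device table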

Re: recreate superblock? (was Re: mdadm: failed to RUN_ARRAY)

Hi Sharif,

From the kernel log, I believe that the raid5 array now has 4 working
disks out of 6 raid disks. My suggestion is to recreate the array.

Before recreating the MD array, I would suggest that you run mdadm
--examine /dev/hdxx (e.g. hde1, hdf1 or hdg1) and write down the
information about the original array setup.

Some of the important RAID 5 fields are: Raid Devices, Total Devices,
Preferred Minor (/dev/mdx), Spare Devices, Layout, Chunk Size, and the
order of devices.

When you **decide** to use mdadm -C to recreate the array, mdadm will ask
you to confirm overwriting the existing md superblocks.

Good luck!
Mike T.


> /dev/hdi is one of the bad one, but I think it is not totally gone. So I
> was trying to force it.
>
> [root [at] pep root]# mdadm -A /dev/md0 --force /dev/hd[efghi]1
> mdadm: no RAID superblock on /dev/hdi1
> mdadm: /dev/hdi1 has no superblock - assembly aborted
>
> I would appreciate some hints. Thanks.
> -Sharif
>
> [earlier messages quoted in full above; snipped]

Mike Tran [ Mon, 11 October 2004 15:42 ] [ ID #162523 ]
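
To make the checklist above concrete: every field Mike lists is printed by `mdadm --examine` on any surviving member, and each maps onto an option of `mdadm --create`. A sketch of the mapping (field names as printed for 0.90-format superblocks, which is what this kernel log suggests):

# Record the old geometry from a surviving member before re-creating:
mdadm --examine /dev/hde1 | grep -E 'Raid Level|Raid Devices|Total Devices|Preferred Minor|Layout|Chunk Size'

# Rough mapping onto create options:
#   Raid Level      -> --level
#   Raid Devices    -> --raid-devices
#   Preferred Minor -> the /dev/mdX device name
#   Layout          -> --layout
#   Chunk Size      -> --chunk
# The device table at the bottom of --examine preserves the original order.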

Re: recreate superblock? (was Re: mdadm: failed to RUN_ARRAY)

Mike Tran wrote:

>Hi Sharif,
>
>From the kernel log, I believe that the raid5 array now has 4 working
>disks out of 6 raid disks. My suggestion is to recreate the array.
>
>Before recreating the MD array, I would suggest that you run mdadm
>--examine /dev/hdxx (i.e. hde1, hdf1 or hdg1) and write down the
>information of the original array setup.
>
>Some of the important Raid5 info are : Raid Devices, Total Devices,
>Preferred Minor (/dev/mdx), Spare Devices, Layout, Chunk Size, order of
>devices.
>
>When you **decide** to use mdadm -C to recreate array, mdadm will ask
>you to confirm on overwriting existing md superblock.
>
>
Thanks for your response. Will it delete the existing data on the RAID?

-Sharif

Sharif Islam [ Mon, 11 October 2004 21:08 ] [ ID #162524 ]
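
On whether re-creating deletes the data: `mdadm --create` writes fresh superblocks on the listed members, and when one slot is given as `missing` there is no initial resync, so the data area itself is not rewritten; the filesystem should reappear as long as the level, chunk size, layout and device order match the old array exactly. A cautious way to verify that before trusting the array with writes, sketched under the assumption that it holds a single filesystem:

fsck -n /dev/md0              # -n: check only, change nothing on disk
mount -o ro /dev/md0 /mnt     # or mount read-only and inspect a few files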

Worked!! Re: recreate superblock? (was Re: mdadm: failed to RUN_ARRAY)

Thanks. This worked. I am back up in degraded mode.

mdadm --create --verbose /dev/md5 --level=5 --chunk=128
--raid-devices=6 /dev/hde1 /dev/hdf1 /dev/hdg1 /dev/hdh1 /dev/hdi1 missing

Sharif Islam wrote:

> [previous exchange quoted in full above; snipped]


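A natural follow-up to the degraded array above: once the data has been checked, redundancy can be restored by adding a disk into the empty slot, which starts a rebuild onto it. A short sketch; /dev/hdk1 is just the candidate from this thread (a replacement drive would be added the same way), not a statement that the old drive is healthy:

mdadm --add /dev/md5 /dev/hdk1    # fill the "missing" slot; recovery starts automatically
cat /proc/mdstat                  # shows rebuild progress
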
If you hit `bash: mdadm: command not found` (for example on CentOS 7.5), the `mdadm` tool is simply not installed on the system. A step-by-step fix:

---

### Step 1: Understand the cause

`mdadm` is the core tool for managing software RAID on Linux. CentOS 7.5 may not include it by default, so it has to be installed through the package manager.

---

### Step 2: Check whether it is already installed

Run the following in a terminal:

```bash
rpm -q mdadm
```

- Output like `mdadm-4.1-1.el7.x86_64` means it is already installed.
- Output saying the package is not installed means it has to be installed manually.

---

### Step 3: Install mdadm

Install it with the `yum` package manager (root privileges required):

```bash
# switch from a normal user to root
su -
# or use sudo (if sudo is already configured)
sudo yum install mdadm -y
```

---

### Step 4: Verify the installation

After installation, check that the command is available:

```bash
mdadm --version
```

Normal output looks similar to:

```
mdadm - v4.1 - 2023-xx-xx
```

---

### Step 5: Additional notes

1. **Offline installation**
   If the server has no internet connection, download the `mdadm` RPM package in advance (from a CentOS mirror) and install it manually:

   ```bash
   rpm -ivh mdadm-*.rpm
   ```

2. **Minimal installs**
   If the system was a minimal install, dependencies may be missing. Update against a full mirror:

   ```bash
   yum update
   ```

---

### Summary

- The direct cause of the error is that the `mdadm` package is not installed.
- After `yum install mdadm`, the command works normally.
- If it still fails after installation, check whether `$PATH` includes the `/sbin` directory (view it with `echo $PATH`).
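
For that last point, the check and a temporary fix look like this (a sketch; whether `/sbin` is on `PATH` depends on how the shell was started):

```bash
echo $PATH                          # confirm /sbin and /usr/sbin are listed
export PATH=$PATH:/sbin:/usr/sbin   # add them for the current shell session
```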