[Bioclusters] error while running mpiblast
Joe Landman
landman at scalableinformatics.com
Wed Mar 2 07:58:24 EST 2005
How did you run mpiformatdb? What command line options did you use?
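For reference, fragmenting a protein database for a 4-way run would look something like this (the fragment count and paths here are placeholders, not your actual setup):

  mpiformatdb -N 4 -i pir.fasta -p T

where -N sets the number of database fragments (it should match the worker count you give mpirun), -i names the FASTA input, and -p T marks it as protein.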
Where are your BLOSUM matrices et al. stored?
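blastall normally finds the scoring matrices through the Data entry in .ncbirc; a minimal sketch, assuming your NCBI data directory sits at /usr/local/ncbi/data:

  [NCBI]
  Data=/usr/local/ncbi/data

If that entry is missing or points somewhere wrong, you tend to get complaints about BLOSUM62 and friends.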
On Wed, 2 Mar 2005, kalyani goli wrote:
> Hi all!
> I set Shared and Local to the same path and was able to run mpiBLAST 1.3.0. I did not
> quite follow what you were saying. Should I continue like this, or change
> the Local path?
>
> I am getting the following error, repeated 200 times, while running
> mpiblast with the command below. I am attaching the sequence file.
> Could you help me find where the error might be? I am able to
> execute other sequence files with the same command, the same database, and the
> same database fragments and indexes.
>
> time mpirun -np 4 ~/bin/mpiblast -p blastp -d pir.fasta -i seqnew2 -o
> mpiblastoutput2.txt
>
> [blastall] ERROR: ncbiapi [000.000] ObjMgrNextAvailEntityID failed
> with idx 2048
>
> On Wed, 2 Mar 2005 00:59:11 -0500 (EST), James Cuff <jcuff at broad.mit.edu> wrote:
> > On Wed, 2 Mar 2005, Joe Landman wrote:
> >
> > > It is quite possible that mpiblast will scale better than NCBI BLAST
> > > on this system. MPI forces you to pay attention to locality of
> > > reference, so you tend to do a good job partitioning your code (that is,
> > > if it scales). NCBI's is built with pthreads, and I haven't seen it scale
> >
> > *snip*
> >
> > See - I told you that Joe knew his stuff... (old school, with a touch of
> > new)
> >
> > > Lucas sent me a note indicating that in 1.3.0 they allow shared and
> > > local to coexist. Aaron/Lucas, if you are about, could you clarify some
> > > of this? I don't want to lead people astray (and I will need to update
> > > the SGE tool).
> >
> > *blush* It did actually work for me with (local/local) on our cluster, but
> > it kept moaning about this darn thing called 'blossom'... if only I
> > knew what a 62-year-old flower had to do with genome analysis ;-)
> >
> > Sorry, in all seriousness: a couple of weeks ago I pushed through one of
> > "them there lazy, throw in a genome, get out an answer, I just can't be
> > bothered to chunk and overlap this sucker" problems on 200 nodes. I got
> > about 147 nodes' worth of 'throughput', but I got the answer in really
> > short time, well, 147 times shorter (factoring in my laziness) to be
> > precise :-).
> >
> > mpiblast works. Really very well, for certain problems. There, I said it.
> >
> > Guy and Tim will probably never forgive me... I think I may have been the
> > original 'embarrassingly parallel is the only way, nothing else will ever
> > give the throughput, yada, yada' advocate...
> >
> > > Note: We have not built the mpiblast RPM for Itanium (nor, for that
> > > matter, any of our other RPMs). Is there any interest in this? Curious.
> >
> > Shame they cost so darn much (well, ours do), but folk keep demanding that I
> > cram 64GB into them for something called whole genome assembly. I just
> > can't for the life of me understand why they cost so much :-)
> >
> > Best,
> >
> > J.
> >
> > --
> > James Cuff, D. Phil.
> > Group Leader, Applied Production Systems
> > Broad Institute of MIT and Harvard. 320 Charles Street,
> > Cambridge, MA. 02141. Tel: 617-252-1925 Fax: 617-258-0903
-------------- next part --------------
A non-text attachment was scrubbed...
Name: seqnew2
Type: application/octet-stream
Size: 42488 bytes
Url: http://bioinformatics.org/pipermail/bioclusters/attachments/20050302/28806dc3/seqnew2-0001.obj