[Bioclusters] Memory Usage for Blast - question

Hrishikesh Deshmukh hdeshmuk at gmail.com
Wed Mar 9 17:50:37 EST 2005


Hi,

You haven't told us how long each sequence is! You can tweak the word
size (-W) to make BLAST faster, but it then becomes less sensitive. I
suggest you take a look at the book "BLAST" by Ian Korf et al.
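
For example (untested, and -W 4 is only an illustration; the blastp
default word size is 3, and raising it trades sensitivity for speed):

blastall -i $input -o $output -d $db -p blastp -m 8 -F F -W 4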

Thanks,
Hrishi


On Wed, 09 Mar 2005 16:09:41 -0600, Dinanath Sulakhe
<sulakhe at mcs.anl.gov> wrote:
> Hi,
> I am not sure if this is the right place to ask this question!
> I am running BLAST (NCBI) in parallel on a cluster with 80 nodes (I am
> running NCBI NR against itself). Each node is a dual processor.
> 
> I am using Condor to submit the jobs to this cluster. The problem I am
> running into is that whenever two BLAST jobs (each with 100 sequences)
> are assigned to one node (one per processor), the node cannot handle
> the combined memory use of the two jobs. The PBS mom daemons on the
> node cannot allocate the memory they need to monitor the jobs, so
> they fail, killing the jobs.
> 
> Condor doesn't recognize this failure and assumes the job completed
> successfully, but in fact only a few sequences get processed before
> the job is killed.
> 
> Now the admin of the site is asking me whether it's possible to reduce
> the amount of memory these BLAST jobs use. He says the jobs are
> requesting about 600-700 MB of RAM each, and he is asking me to reduce
> that to at most 500 MB.
> 
> Is it possible to reduce the amount of RAM BLAST requests by tweaking
> any of its parameters?
> 
> My BLAST options are:
> 
> blastall -i $input -o $output -d $db -p blastp -m 8 -F F
> 
> Please let me know,
> Thank you,
> Dina
> 
> _______________________________________________
> Bioclusters maillist  -  Bioclusters at bioinformatics.org
> https://bioinformatics.org/mailman/listinfo/bioclusters
>

