<div class="gmail_quote">Hi Ioannis<br><br>Well I'm impressed that you could compile tinker, nice! And also glad to have a "customer" after all this time ;)<br><br>So your questions one by one:<br><br>1) cns (xplor-cns). I never tried it but it was the thing I always wanted to do... I think it is much better than tinker, and anyway it is the standard for NMR these days, so if anything at least you get structures that are similar to the NMR structures in the PDB. I guess it wouldn't be complicated to write an adapter for it with the code we have... if only I had the time! ;)<br>
<br>2) tinker 6. Didn't try it either, but the dynamic memory allocation is THE feature that we needed. Did you try compiling it? Or even using their provided binaries with reconstruct? That in principle should solve all the problems we have with Tinker 5 and earlier<br>
<br>3) reconstruct output: rec.pdb is the elongated pdb (use it as input for distgeom), rec.report is the report with RMSDs and so on, so that's the one you should look at. Otherwise the important ones are the final models, rec.xxx.pdb<br>
<br>4) other parameters: you should definitely use -e, otherwise you get crap models that aren't even accepted by the CASP server. For -f (force constant) I'd use the default. BTW don't use fast mode, always use simulated annealing<br>
<br>5) third column: reconstruct ignores it, as far as I remember. It wouldn't be difficult to implement assigning different force constants to different edge weights. But what we don't know is what kind of weight-to-force-constant mapping is reasonable... that part would require quite some time to optimise<br>
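Just to illustrate what such a mapping could look like: a minimal sketch, assuming a simple linear interpolation between two force-constant endpoints. None of this exists in reconstruct; the function name and the k_min/k_max values are made up, and both the endpoints and the linear form are exactly the things that would need optimising.

```python
# Hypothetical sketch, NOT a reconstruct feature: map a contact's
# confidence weight w in [0, 1] (the third column of the cm file) to a
# distance-restraint force constant, interpolating linearly between
# k_min and k_max. All values here are arbitrary placeholders.
def weight_to_force_constant(w, k_min=1.0, k_max=100.0):
    # Clamp in case a predictor emits values outside [0, 1].
    w = max(0.0, min(1.0, w))
    return k_min + w * (k_max - k_min)

if __name__ == "__main__":
    for w in (0.1, 0.5, 0.9):
        print(w, weight_to_force_constant(w))
```

High-confidence contacts would then be restrained tightly while low-confidence ones contribute only weakly, but whether linear (rather than, say, logistic) weighting is reasonable is an open question.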
<br>6) sampling: our rule was 40 models minimum (anyway half of them are mirrors), but of course the more the better. Indeed there's a random seed for each model, so in principle there's no difference between 1x20 and 10x2. One thing I'd recommend using is the cluster feature (-A). That was working quite well and you get linear scaling (one model per CPU)<br>
<br>Well that's about it. Let me know how it goes, especially if you try tinker 6. I'd be interested to know if that works at all.<br><br>Cheers<span class="HOEnZb"><font color="#888888"><br><br>Jose</font></span><div class="HOEnZb">
<div class="h5"><br><br><br><br><br><div class="gmail_quote">On 25 January 2012 13:46, Ioannis Filippis <span dir="ltr"><<a href="mailto:filippis@gmail.com" target="_blank">filippis@gmail.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi guys,<br><br>I have already been mailing Jose about reconstruction.<br><br>I managed to compile tinker 5.1.09 and run reconstruction-1.0 and it worked fine!!! The only problem with compiling in 64-bit (when increasing the parameters in sizes.i) was an error message "Relocation truncated to fit", which I overcame by using -mcmodel=medium in compiling and linking.<br>
<br>I have a few questions:<br>-First, what are all these files in the output dir? I guess I only need the rec.XXX.pdb files as the predicted models<br>ld-mjeste16 109% ls test/<br>rec.001 rec.001.pdb rec.key rec.pdb rec.report rec.seq rec.tinker.log rec.xyz<br>
-Second, what parameters do you usually use? I guess all default settings, but I am not sure about -e and -f<br>-Third, does tinker take into account the last column in the cm file? So if I have contact predictions with confidences/probabilities, will they be taken into account? If not, is there any simple way to enforce it?<br>
-How many models have to be generated to get a good sampling of output? Any difference between running reconstruct once and generating 20 models or running it 10 times and generating 2models each time (some seed somewhere)?<br>
-Any approximate run times?<br><br>I guess I have heard all of these before but don't really remember!<br><br>Many thanks!<br><br>Best,<br>Ioannis<br><br></blockquote></div></div></div></div><br>