Before starting to add elements to your PowerPoint file, the options of the file in question must be set appropriately so that all pasted images are saved at their original resolution and in high fidelity. If you already have a PowerPoint file with images in it, change the file options and then import/paste the original images into the document again: images that were imported before setting these quality options have already been compressed.

To change the file options for original image resolution and high fidelity, go to “Options” and, under “Image Size and Quality” in the “Advanced” tab, make sure “Do not compress images in file” is ticked and the default resolution is set to “High fidelity”.

Finally, remember to double-check that image quality is maintained when saving/exporting your PowerPoint file to other formats. When using “Save As”, select “Tools” → “Compress pictures” and ensure that “High fidelity” is selected before saving.

P.S.: to interconvert between image formats without losing quality, my first choice is https://www.zamzar.com/, although its free version has some limits on file size and number of conversions.
Francho Nerín Fonz
- Problem
I recently noticed that one of the systems we are simulating suffered from decreasing performance as the simulation progressed. Further investigation revealed that it was not the actual performance of the simulation that was lower: PLUMED needed increasingly large amounts of time to pre-process the HILLS file for each replica. This is not an issue for a short-running simulation, but for one where the simulation time runs into several μs (such as coarse-grained simulations), it can completely prevent the trajectory from progressing past a certain point in time.

- Solution

Comparisons with other systems, and similar problems reported in the issues of the PLUMED GitHub repository as well as on the PLUMED mailing list, pointed in the direction of a grid-related setting being the cause of the delay; specifically, the fineness/coarseness of the grid. The two files below (CORRECT PLUMED INPUT FILE & OLD PLUMED INPUT FILE) differ in how the grid density is specified. In the `old` file, the grid spacing was manually specified with the GRID_SPACING argument in the METAD block. In the `correct` file, that line is absent. In the absence of a GRID_SPACING or GRID_BIN argument, PLUMED uses a grid spacing equal to 1/5 of the Gaussian width (SIGMA) of each Collective Variable (CV). With GRID_SPACING=0.01,0.01, the bias grid holds over a hundred times more points than with this default, which is what made every update and read-back of the grid so slow. Further testing is required, but this default appears to be robust and has solved the issue in this particular instance.

CORRECT PLUMED INPUT FILE

```
C1: RMSD REFERENCE=rmsd_reference.pdb TYPE=OPTIMAL
COM1: CENTER ATOMS=1-122
COM2: CENTER ATOMS=123-244
D1: DISTANCE ATOMS=COM1,COM2

METAD ...
  ARG=C1,D1
  SIGMA=0.4,0.8
  HEIGHT=0.005
  PACE=100
  LABEL=meta
  BIASFACTOR=2.0
  TEMP=300
  GRID_MIN=0,1
  GRID_MAX=4.5,6
... METAD

UPPER_WALLS ARG=C1 AT=4 KAPPA=300.0 EXP=2 EPS=1 OFFSET=0 LABEL=uwall
UPPER_WALLS ARG=D1 AT=5.5 KAPPA=300.0 EXP=2 EPS=1 OFFSET=0 LABEL=u2wall

# monitor the two variables and the metadynamics bias potential
PRINT STRIDE=10000 ARG=C1,D1,meta.bias FILE=COLVAR
```

OLD PLUMED INPUT FILE (do not use!)

```
C1: RMSD REFERENCE=rmsd_reference.pdb TYPE=OPTIMAL
COM1: CENTER ATOMS=1-122
COM2: CENTER ATOMS=123-244
D1: DISTANCE ATOMS=COM1,COM2

METAD ...
  ARG=C1,D1
  SIGMA=0.4,0.8
  HEIGHT=0.005
  PACE=100
  LABEL=meta
  BIASFACTOR=2.0
  TEMP=300
  GRID_MIN=0,1
  GRID_MAX=4.5,6
  GRID_SPACING=0.01,0.01
... METAD

UPPER_WALLS ARG=C1 AT=4 KAPPA=300.0 EXP=2 EPS=1 OFFSET=0 LABEL=uwall
UPPER_WALLS ARG=D1 AT=5.5 KAPPA=300.0 EXP=2 EPS=1 OFFSET=0 LABEL=u2wall

# monitor the two variables and the metadynamics bias potential
PRINT STRIDE=10000 ARG=C1,D1,meta.bias FILE=COLVAR
```
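To put rough numbers on the difference, here is a minimal back-of-the-envelope sketch in plain Python (not part of the original inputs; it only assumes the SIGMA/5 default described above):

```python
# Grid extents from GRID_MIN/GRID_MAX and widths from SIGMA in the inputs above.
ranges = [4.5 - 0.0, 6.0 - 1.0]  # C1, D1
sigmas = [0.4, 0.8]

# Old input: fixed spacing of 0.01 along both CVs.
old_points = 1
for r in ranges:
    old_points *= round(r / 0.01)

# Default behaviour (no GRID_SPACING/GRID_BIN): spacing = SIGMA / 5 per CV.
default_points = 1
for r, s in zip(ranges, sigmas):
    default_points *= round(r / (s / 5))

print(f"old grid:     {old_points:,} points")      # 225,000
print(f"default grid: {default_points:,} points")  # 1,736
print(f"ratio:        ~{old_points / default_points:.0f}x")
```

The old input's grid holds roughly 130 times more points for PLUMED to allocate and update every time a hill is added or the HILLS file is re-read.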
- Problem

Files/folders might become corrupted when transferring between computers or even disks, in a way that is not obvious to your code: it won't throw a warning, but the validity of your data might still be affected.

- Solution

This problem can occur when transferring files from one cluster to another, but it can also occur at any point where a file transfer is taking place, e.g.:

* Transferring a file from an internal to an external disk drive
* Transferring a file from one internal disk drive to another internal disk drive
* Transferring a file from one computer to another over the network

It can even take place when no operations are being performed on the file at all. This is called bit rot, and data centers that specialise in archival, where data integrity is of high importance, employ specialised hardware and software to detect and correct it. For our purposes, what we can do is focus on best practices when downloading or uploading files from or to a location. This boils down to two things:

(1) Instead of transferring multiple small files and folders, it is better to transfer a single item. This can be achieved with a command like `tar -czf directory.tgz directory` if we are interested in transferring a single directory, but it can, of course, accommodate as many folders as we require. For the next step we need a way of generating a unique "identity" for the tgz archive. For this we can use a checksum. One way of computing one can be seen in the command below:

`md5sum directory.tgz`

This will print a string of alphanumeric characters (the aforementioned "identity" of the file) followed by whitespace and the filename. The output of the command can be stored in a file for easier comparison. After transferring the file to the destination, we can run the `md5sum` command there as well and verify that the hashes are identical. An added benefit of transferring data in a single archive is that it is faster, as our file transfer program of choice (e.g. `scp` or `rsync`) only needs to negotiate a single connection.

(2) Alternatively, if it doesn't make sense to bundle our data in a single archive, we can run `md5sum` on all files to be transferred and compare all of the checksums before and after the transfer. This can be achieved in many ways, but one command that does the trick is:

`find -L . -type f | xargs md5sum | sort -Vk2`

This should be run from a location that contains all of the files you want to transfer. A short explanation of the various flags follows:

* `-L`: instructs `find` to follow symlinks
* `.`: instructs `find` to search in the current directory
* `-type f`: instructs `find` to only match regular files
* `xargs md5sum`: runs `md5sum` on all detected files
* `sort -Vk2`: sorts the results by file name, to avoid differences in the default sort order due to locale settings

The two files produced by the above command (before and after the transfer) can then be compared to ensure the transferred files are identical, as in the sketch below.
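For instance, putting the two approaches together (file names, paths, and the `user@destination` host are placeholders):

```bash
# --- Single-archive route ---
# On the source machine: archive, checksum, and send both files.
tar -czf directory.tgz directory
md5sum directory.tgz > directory.tgz.md5
scp directory.tgz directory.tgz.md5 user@destination:~/

# On the destination machine: re-compute the checksum and compare it
# against the stored one; prints "directory.tgz: OK" on success.
md5sum -c directory.tgz.md5

# --- Many-files route ---
# Run on both machines from the directory containing the data,
# then compare the two listings; no output from diff means identical.
find -L . -type f | xargs md5sum | sort -Vk2 > checksums.txt
diff checksums_source.txt checksums_destination.txt
```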
Panos

```
sudo apt-get update
sudo apt-get clean
sudo apt-get update
sudo apt-get upgrade
sudo apt-get dist-upgrade
```

You can also try:

```
sudo apt full-upgrade
```

Ubuntu 20.04 start screen frozen at system check (Ctrl-C to stop system checks does not work)

12/15/2021

1) Install the recommended NVIDIA drivers for your graphics card:
```
sudo add-apt-repository ppa:graphics-drivers/ppa
ubuntu-drivers devices
sudo apt install [driver_name]
```

or

```
sudo ubuntu-drivers autoinstall
sudo reboot
```

2) You may have to uninstall all NVIDIA drivers first:

```
dpkg -l | grep -i nvidia
sudo apt-get remove --purge '^nvidia-.*'
sudo apt-get install ubuntu-desktop
sudo reboot
```

3) Disable the nvidia-drm modeset option

I discovered that prime-select writes a configuration file which causes the problem: it enables the nvidia-drm modeset option. You can simply undo the change made by prime-select by commenting out this option. It will not be reset, because prime-select only writes this file when it does not exist yet.

Open the file in your favorite editor (vim, nano, gedit, etc.):

```
sudo nano /lib/modprobe.d/nvidia-kms.conf
```

And comment out the nvidia-drm modeset option:

```
# This file was generated by nvidia-prime
# Set value to 0 to disable modesetting
# options nvidia-drm modeset=1
```

LiPyphilic is a Python package for analyzing lipid membrane simulations. Analysis tools in LiPyphilic include the identification of cholesterol flip-flop events, the classification of local lipid environments, and the degree of interleaflet registration. LiPyphilic is both force field- and resolution-agnostic, and by using the powerful atom selection language of MDAnalysis, it can handle membranes with highly complex compositions. LiPyphilic also offers two on-the-fly trajectory transformations to (i) fix membranes split across periodic boundaries and (ii) perform nojump coordinate unwrapping. The implementation of nojump unwrapping accounts for fluctuations in the box volume under the NPT ensemble, an issue that most current implementations have overlooked. The full documentation of LiPyphilic, including installation instructions and links to interactive online tutorials, is available at https://lipyphilic.readthedocs.io/en/latest. The publication can be found here: https://pubs.acs.org/doi/10.1021/acs.jctc.1c00447
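As a taste of the workflow, here is a minimal sketch of assigning lipids to leaflets with LiPyphilic on top of MDAnalysis (the trajectory file names and the MARTINI bead selection are placeholders; check the documentation linked above for the current API):

```python
import MDAnalysis as mda
from lipyphilic.lib.assign_leaflets import AssignLeaflets

# Load a membrane trajectory (file names are placeholders).
u = mda.Universe("production.tpr", "production.xtc")

# Assign each lipid to a leaflet based on headgroup positions;
# the selection below assumes MARTINI coarse-grained bead names.
leaflets = AssignLeaflets(
    universe=u,
    lipid_sel="name GL1 GL2 ROH",
)
leaflets.run()

# leaflets.leaflets is an (n_lipids, n_frames) array:
# 1 = upper leaflet, -1 = lower leaflet.
print(leaflets.leaflets.shape)
```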
When mounting a new disk on Ubuntu, the default permissions are restrictive: a freshly formatted filesystem is owned by `root:root`, so regular users cannot write to it.
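A common fix is to hand ownership of the mount point to your own user; a minimal sketch, assuming the disk is mounted at `/mnt/data` (the path is a placeholder):

```bash
# Inspect the current ownership and permissions of the mount point.
ls -ld /mnt/data

# Give your user ownership of the mount point and everything on the disk.
sudo chown -R $USER:$USER /mnt/data
```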