
Works at Academia Sinica



384 DNA synthesizer

384 Oligonucleotide Synthesizer: A PC-based system with a GUI sends commands to a Parker AT6250 motion control card and an I/O module. Two stepper motors move the nozzle seat in the X-Y plane to each programmed coordinate, and the solenoid valves are then switched on to inject the various chemical reagents into the reaction wells. This synthesizer consumes less reagent while producing more oligonucleotide sequences (384, or 1536 in the expanded version) per run, and its throughput exceeds that of any commercial instrument.
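
To make the control flow concrete, here is a minimal Python sketch of the move-then-inject cycle. MotionCard, IOModule, move_to, and open_valve are hypothetical stand-ins for the vendor's actual AT6250 driver API, and the pulse time and plate pitch are illustrative values only.

# Hypothetical wrappers for the Parker AT6250 motion card and the I/O
# module; names and signatures are assumptions, not the real driver API.
class MotionCard:
    def move_to(self, x_mm, y_mm):
        """Drive the two X-Y stepper motors until the nozzle seat
        reaches the programmed coordinate."""
        ...

class IOModule:
    def open_valve(self, reagent, seconds):
        """Energize one reagent's solenoid valve for a short pulse,
        then close it."""
        ...

def inject_plate(motion, io, wells, reagent, pulse_s=0.05):
    """Visit every programmed well coordinate and inject one reagent."""
    for x_mm, y_mm in wells:
        motion.move_to(x_mm, y_mm)       # position the nozzle over the well
        io.open_valve(reagent, pulse_s)  # meter a pulse into the well

# A standard 384-well plate is a 16 x 24 grid at 4.5 mm pitch.
wells = [(col * 4.5, row * 4.5) for row in range(16) for col in range(24)]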






1536 DNA synthesizer


1536 Oligonucleotide Synthesizer: Four 384-oligonucleotide synthesizer modules were combined into one machine, boosting throughput to 1,536 custom oligonucleotides per synthesis run. Tiled injection nozzles and four times as many solenoid valves let the four modules synthesize simultaneously, so the run does not take four times as long.
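
Building on the sketch in the previous section, one plausible way to drive the four modules in parallel is a small thread pool; modules is assumed to be a list of objects exposing the motion and io handles from that sketch, not part of the real control software.

from concurrent.futures import ThreadPoolExecutor

def synthesize_1536(modules, wells, reagent):
    """Run the same injection cycle on all four 384-well modules at once,
    keeping total run time close to that of a single module."""
    with ThreadPoolExecutor(max_workers=4) as pool:
        jobs = [pool.submit(inject_plate, m.motion, m.io, wells, reagent)
                for m in modules]
        for job in jobs:
            job.result()  # re-raise any hardware error from a module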






Micro Arrayer


Micro Arrayer: A PC-based system with a GUI for setting parameters such as the number of spotting pins, pitch, number of copies, and substrate material sends commands to two Parker AT6250 motion cards and an I/O module, which drive a robotic arm and an XYZ positioning system. The robotic arm holds each sample tray and moves it between the storage rack array and the XYZ positioning system. The XYZ system then dips the spotting pins into the oligonucleotide samples in the plate and spots them onto a slide or membrane, depositing oligonucleotide probes for later hybridization with fluorescently labeled targets.
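
The spot layout implied by the pitch and copies parameters can be sketched in a few lines of Python; the grid size, pitch, and replicate placement below are illustrative assumptions, not the arrayer's actual configuration.

def spot_layout(origin_xy, pitch_mm, rows, cols, copies=1):
    """Generate slide coordinates for a rows x cols probe grid, printing
    each probe `copies` times in adjacent columns."""
    x0, y0 = origin_xy
    spots = []
    for r in range(rows):
        for c in range(cols):
            for k in range(copies):
                spots.append((x0 + (c * copies + k) * pitch_mm,
                              y0 + r * pitch_mm))
    return spots

# e.g. a 16 x 16 probe block at 0.3 mm pitch, two replicate spots per probe
layout = spot_layout((5.0, 5.0), 0.3, 16, 16, copies=2)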







In situ synthesizer


In-Situ Synthesizer: Synthesizing and arraying the DNA chip at the same time saves time and chemical reagents when only a few chips are needed. First, the slide surface is modified; oligonucleotide synthesis then takes place on a spin stage. Piezoelectric pipettes address each spot with picoliter volumes of deblocking reagent, which allows a high spot density. The spin stage then spins to coat the slide while reagent is sprayed on, accelerating the oligonucleotide synthesis.
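
A rough Python sketch of one way this layer-by-layer cycle could run; the device objects (piezo, spin_stage, sprayer), their methods, and the per-base deblocking order are assumptions drawn from the description above, not the instrument's real control code.

def synthesize_in_situ(sequences, spots, piezo, spin_stage, sprayer):
    """For each base layer: deblock, with picoliter piezo drops, only the
    spots whose next residue is that base, then spray and spin-coat the
    slide so coupling happens everywhere at once."""
    longest = max(len(seq) for seq in sequences)
    for pos in range(longest):
        for base in "ACGT":
            targets = [xy for seq, xy in zip(sequences, spots)
                       if pos < len(seq) and seq[pos] == base]
            if not targets:
                continue
            for x_um, y_um in targets:
                piezo.dispense(x_um, y_um, volume_pl=10)  # pL deblock drop
            sprayer.spray(base)       # spray coupling reagent over the slide
            spin_stage.spin(rpm=600)  # spin to coat evenly and speed coupling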

Sensitive Protein Detector with Aptamers by GNP Resonance Light Scattering



Under illumination, the light-scattering intensity of gold nanoparticles (GNPs) in the Rayleigh regime increases dramatically with particle size, scaling with the sixth power of the diameter. We first conjugate the aptamers TBA3 and TBA5 to the surfaces of separate batches of GNPs. A reshaped beam from a 638 nm laser then passes through a cuvette containing the TBA3-GNP and TBA5-GNP complexes at a 1:1 molar ratio. A photodiode collects the scattered light and converts it to a voltage, which a DAQ connected to the PC records as the background signal. When thrombin is added, it binds the TBA3 and TBA5 on the GNPs, and this dual affinity aggregates the particles into larger complexes. The larger effective diameter boosts the scattering intensity and raises the voltage signal. To maximize detection sensitivity, 60 nm GNPs are used, since they show the largest change in scattering amplitude at 638 nm between the dispersed and aggregated states.
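
A back-of-the-envelope Python check of the d^6 scaling claim; treating an aggregated dimer as a single sphere of equal volume is a crude assumption used purely to illustrate the size of the signal gain.

# Rayleigh scattering intensity scales as d**6 for particles well below
# the wavelength, so even modest aggregation boosts the signal sharply.
d_mono = 60.0                    # nm, single conjugated GNP
d_dimer = d_mono * 2 ** (1 / 3)  # equal-volume sphere for two fused GNPs, ~75.6 nm
gain = (d_dimer / d_mono) ** 6   # (2**(1/3))**6 = 4x scattering per particle
print(f"dimer/monomer scattering ratio ~ {gain:.1f}x")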


