<p><em>Proceptive Insights: Tucker Sylvia’s personal space on the web to blog about problems I solve and other tidbits worth sharing.</em></p>
<h1>3D Printed Bathymetric Charts for the Visually Impaired</h1>
<p><em>Tucker Sylvia, 2019-10-05</em></p>
<h3 id="harnessing-open-source-software-and-3d-printing-technologies-to-create-adaptive-solutions-for-impaired-and-disabled-communities">Harnessing open source software and 3D printing technologies to create adaptive solutions for impaired and disabled communities.</h3>
<div>
<figure>
<img src="/assets/posts/2019-10-05-3d-printed-bathymetric-charts-for-the-visually-impaired/matt-map.png" alt="3D Bathymetry" width="85%" />
<figcaption>Example of a 3D printed bathymetric chart of lower East Passage of Narragansett Bay</figcaption>
</figure>
</div>
<hr />
<h2 id="a-3d-picture-is-worth-way-more-than-a-thousand-words">A 3D picture is worth way more than a thousand words</h2>
<p>First post in a long while, but it only took a few minutes to remember the commands needed to get the site up to date and create content. There’s something to be said for that kind of simplicity, and I should document how this site works in future posts. Anyway…</p>
<h3 id="first-some-context">First, some context:</h3>
<p>I should start by stating that I have lived on an island for my entire life and always actively participated in water sports and appreciated my close proximity to the ocean. In fact, I consider this a dominant factor shaping how and why I have gotten to where I am and decisions I have made along the way.</p>
<p>I began sailing when I was 8 (somewhat begrudgingly, which is hard to fathom now, 20 years later…) and it has continued to be my main sport and summer (and sometimes winter) activity. The summer before college was my first year as a sailing instructor. Since then I have worked for an adaptive sailing program, <a href="https://sailtoprevail.org/">Sail to Prevail</a>, for 4-5 years in Newport and Nantucket, as well as for <a href="https://sailnewport.org/">Sail Newport</a>, and I coach a high school sailing team. I have been pretty involved in sailing instruction for the past decade, which is part of how this whole project came to be, because this is how I met Matt.</p>
<p>I met Matt 2 or 3 years ago because he wanted to work on short-handed spinnaker sailing to prepare for his summer racing circuit. We would go out double-handed and do half-day practices periodically. During one of our practice sessions he asked for clarification about the bathymetry where we sailed and its impact on surface current. The challenging part of coming up with an explanation was that Matt is blind, so he cannot read a typical chart. This was when it became clear that I could probably print a tactile chart for Matt on my 3D printer. The project became a fantastic culmination of a few different passions of mine… hacking and tinkering, sailing and oceanography, and teaching… and I was inspired.</p>
<ul>
<li>I wrote <a href="https://www.facebook.com/mattchaoblindsailor/posts/528277534246734?__tn__=K-R">another piece</a> about sailing with Matt you can find on his Facebook page.</li>
</ul>
<iframe src="https://www.facebook.com/plugins/post.php?href=https%3A%2F%2Fwww.facebook.com%2Fmattchaoblindsailor%2Fposts%2F528277534246734&width=500" width="500" height="198" style="border:none;overflow:hidden" scrolling="no" frameborder="0" allowtransparency="true" allow="encrypted-media"></iframe>
<hr />
<h3 id="back-to-the-actual-story">Back to the actual story:</h3>
<p>So, that was a long intro, but it frames the rest of the struggle nicely. Initially I did not think this would be all that challenging. There is a plethora of free and open bathymetry data out there, and I assumed someone had come up with a slick way to transform the various digital elevation model formats into an STL that could then be sliced and printed. I had attempted this once before in grad school, when I first got my printer, because I thought we could make some neat teaching aids for a set of courses being designed to expose more non-STEM undergraduates to ocean data science. I was unsuccessful then, but a few years had passed and surely there was now an easy way to get this done…</p>
<p>The final solution ended up requiring multiple steps using an interesting combination of some cool software:</p>
<ul>
<li>Python - common to most of the tools mentioned below
<ul>
<li>pyGDAL</li>
<li>pyNetCDF4</li>
<li>NumPy</li>
<li>Matplotlib</li>
</ul>
</li>
<li><a href="https://qgis.org/en/site/">QGIS</a> - FOSS GIS package with a ton of functionality</li>
<li><a href="https://github.com/ChHarding/TouchTerrain_for_CAGEO">TouchTerrain</a> - converts GeoTIFF to STL</li>
<li>Cura - still my slicer of choice</li>
</ul>
<h4 id="get-data">Get Data:</h4>
<p>I obtained gridded bathymetry data from <a href="http://www.narrbay.org/physical_data.htm">NarrBay.org</a>, which has a ton of data for the bay. I searched <a href="https://www.rigis.org/datasets/bathymetric-depth-contours-for-narragansett-bay">RIGIS</a> and <a href="https://data.noaa.gov/dataset/dataset/narragansett-bay-ri-m020-bathymetric-digital-elevation-model-30-meter-resolution-derived-from-s">NOAA</a> but could not find exactly what I thought I needed. Your mileage may vary for your region of choice. The zip contained an <em>*.e00</em> file, which I had never run across, so more Google Fu was required. Apparently it’s an old ArcInfo raster format, and there are web services that will convert it to whatever you want, but the file was too big to use those for free.</p>
<h4 id="massage-data">Massage Data:</h4>
<p>It took me another couple of tries to figure out how to read and manipulate the <em>*.e00</em> raster file. First I tried QGIS, a very powerful and complete GIS solution with scripting interfaces and a bunch of plugins (I really don’t understand all that it’s capable of), and it sort of worked for my task. I was able to open the <em>*.e00</em> file, which is a gridded raster of the bathymetry with NaNs for land (actually represented as the minimum value of a signed 32-bit integer). This is one area I want to improve in future revisions, because all the land masses and islands are presented as holes, which Matt said was somewhat unintuitive. After more Googling and exploring menus I figured out how to crop a region from the entire file and directly export it as a GeoTIFF. I tried changing the NaN / nodata regions a multitude of ways using the built-in raster calculator and transforming to a different layer with a new nodata value, but could not get an array that worked correctly.</p>
<p>The next stage in the solution was to read and manipulate the data directly with Python, which actually turned out to work really well. GDAL can open the weird raster format as well as the GeoTIFFs of various regions I had exported from QGIS, and I found a NetCDF of the data from NOAA that was way easier for me to work with because I use that format regularly. I was able to slice an ROI from a NumPy array, mask the NaNs and change their value to whatever I wanted, and dump the array back to disk as a GeoTIFF in less than 20 lines in a notebook. I should have known this was the right solution from the beginning. Now I just had to turn my DEM raster into something printable.</p>
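<p>For the curious, the NumPy step that worked reduces to something like the sketch below. The GDAL / NetCDF file I/O is omitted, and the example grid, ROI indices, and land value are made up; the nodata sentinel is the signed 32-bit integer minimum mentioned above.</p>

```python
import numpy as np

NODATA = np.iinfo(np.int32).min  # land cells in the grid carry the int32 minimum

def prep_roi(depths, roi, land_value=0.0):
    """Slice a region of interest and replace nodata (land) cells.

    depths     -- 2D int32 array of bathymetric depths
    roi        -- (row_slice, col_slice) selecting the region to print
    land_value -- height assigned to land so it prints as a surface, not a hole
    """
    region = depths[roi].astype(float)   # copy, so the source grid is untouched
    region[region == NODATA] = land_value
    return region

# Tiny synthetic grid: negative depths with one land cell
grid = np.array([[-5, -10], [NODATA, -20]], dtype=np.int32)
roi = (slice(0, 2), slice(0, 2))
surface = prep_roi(grid, roi)
```

<p>From there the array can be written back out as a GeoTIFF with GDAL and handed to an STL converter.</p>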
<h4 id="dem--stl">DEM –> STL:</h4>
<p>I attempted to use a plugin called <a href="https://demto3d.com/en/">DEMto3D</a> but was unable to get it to work properly. Next came <a href="http://blog.touchterrain.org/">TouchTerrain</a> from the <a href="https://franek.public.iastate.edu/gfl/gfl.html">GeoFabLab</a> at Iowa State, <a href="https://www.sciencedirect.com/science/article/pii/S0098300416304824?via%3Dihub">here is the paper they wrote about it</a>. I had used the TouchTerrain <a href="https://touchterrain.geol.iastate.edu">web interface</a> in the past to print a model of <a href="https://en.wikipedia.org/wiki/Mount_Katahdin">Mt. Katahdin</a> but was unable to get it to work for negative topography because of the USGS DEM they use. As of the 2.0 release the developers also provide a standalone python version that you can get from <a href="https://github.com/ChHarding/TouchTerrain_for_CAGEO">their GitHub page</a> to run on your own data. This works perfectly and yields a binary (or ASCII) STL that you can then feed to your slicer of choice.</p>
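<p>TouchTerrain does the heavy lifting, but the core of any DEM-to-STL conversion is easy to sketch: each cell of the height raster becomes two triangles of the model’s top surface. The toy below is not TouchTerrain’s implementation (a watertight, printable solid also needs side walls and a base, which TouchTerrain adds); it just shows the idea in plain Python.</p>

```python
def heightmap_to_stl(z, cell=1.0, name="dem"):
    """Triangulate a 2D height grid into an ASCII STL top surface.

    Each grid cell yields two triangles. Normals are written as (0, 0, 0);
    most slicers recompute them from the vertex winding.
    """
    rows, cols = len(z), len(z[0])
    facets = []
    for i in range(rows - 1):
        for j in range(cols - 1):
            # the four corner vertices of this cell
            a = (j * cell, i * cell, z[i][j])
            b = ((j + 1) * cell, i * cell, z[i][j + 1])
            c = (j * cell, (i + 1) * cell, z[i + 1][j])
            d = ((j + 1) * cell, (i + 1) * cell, z[i + 1][j + 1])
            facets.extend([(a, b, c), (b, d, c)])
    lines = [f"solid {name}"]
    for tri in facets:
        lines.append("  facet normal 0 0 0")
        lines.append("    outer loop")
        for x, y, h in tri:
            lines.append(f"      vertex {x} {y} {h}")
        lines.append("    endloop")
        lines.append("  endfacet")
    lines.append(f"endsolid {name}")
    return "\n".join(lines)

stl = heightmap_to_stl([[0, 1], [2, 3]])  # one cell -> two facets
```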
<h3 id="print">Print!</h3>
<p>The results were impressive and I can’t wait to create more of these!</p>
<div>
<figure>
<img src="/assets/posts/2019-10-05-3d-printed-bathymetric-charts-for-the-visually-impaired/firstLayer.jpg" alt="First layer printing." width="85%" />
<figcaption>First layer of test print, looking promising.</figcaption>
</figure>
</div>
<div>
<figure>
<img src="/assets/posts/2019-10-05-3d-printed-bathymetric-charts-for-the-visually-impaired/baseLayers.jpg" alt="Base layers done printing." width="85%" />
<figcaption>All base layers nearly completed, good quality so far.</figcaption>
</figure>
</div>
<div>
<figure>
<img src="/assets/posts/2019-10-05-3d-printed-bathymetric-charts-for-the-visually-impaired/midProgress.jpg" alt="Mid-way print progress." width="85%" />
<figcaption>About half-way through the print, which was mostly infill so far, only a few deep regions filled in.</figcaption>
</figure>
</div>
<div>
<figure>
<img src="/assets/posts/2019-10-05-3d-printed-bathymetric-charts-for-the-visually-impaired/finalProduct.jpg" alt="Final product." width="85%" />
<figcaption>The final product, a 3x vertically exaggerated model of the lower East Passage of Narragansett Bay.</figcaption>
</figure>
</div>
<hr />
<p>Hopefully this post gave some information to those out there looking for ways to 3D print representations of digital elevation models for whatever purpose. For me, these types of models represent valuable teaching aids and provide a fantastic mechanism for helping visually impaired people understand the world around them.</p>
<hr />
<h1>I Wrote a Blob Tracker</h1>
<p><em>Tucker Sylvia, 2018-04-16</em></p>
<h3 id="open-source-software-helps-students-and-researchers-solve-unique-problems-and-construct-purpose-built-solutions">Open source software helps students and researchers solve unique problems and construct purpose-built solutions.</h3>
<div>
<figure>
<img src="/assets/posts/2018-04-16-i-wrote-a-blob-tracker/programming.png" alt="Lenna" width="85%" />
<figcaption>Lenna is a staple test image within the computational image processing community.</figcaption>
</figure>
</div>
<hr />
<h2 id="a-picture-is-worth-at-least-thousand-words">A picture is worth at least a thousand words</h2>
<p>Throughout the course of my master’s research in subduction zone geodynamics at URI-GSO I faced challenges that all stemmed from one seemingly unrelated field of study: digital image processing. It turns out that many scientists in seemingly disparate fields rely on the same basic image processing principles to do their work. Photography is perhaps the original form of remote sensing, and modern satellite-based systems, aerial land surveys, and time-lapse imagery of fluid dynamics experiments all rely on basically the same methods.</p>
<p>The general outline of my problem seemed pretty simple to me: tracking colored fluid blobs in 3D space and time within another working fluid using 2 DSLR cameras. Initially I assumed there would be an existing package or project that suited my needs, but I have yet to find it (please let me know of any other similar projects, I’m always interested to see how others do it). I was sure I would be writing or using software to automate much of the process as I had hundreds of gigabytes of raw experimental data to comb through and analyze, with each of my ~50 experiments usually consisting of hundreds of 4K/UHD frames from at least two cameras/angles.</p>
<p>Previous students had used manual or semi-manual approaches like <a href="http://www.arizona-software.ch/graphclick/">GraphClick</a> or <a href="https://physlets.org/tracker/">Tracker</a> that were pretty good but didn’t exactly suit my needs and were a little cumbersome. That said, these are both great ways to digitize data in 2D. I was also advised to check out <a href="http://www.civil.canterbury.ac.nz/streams.shtml">Streams</a> and <a href="http://pivlab.blogspot.com/">PIVLab</a> which were much closer to what I wanted but still did not facilitate registration of features between the two cameras. At this point I also had no clue about registration, features and their detection, calibration (yikes!), contrast and histogram equalization, color spaces, or basically any of the requisite knowledge I would need to solve this problem. I also knew of and had access to MATLAB and the image processing toolbox, but have always preferred open-source methods when possible, and Python is my favorite language.</p>
<p>In my first year I attended a workshop on analogue modeling that addressed many of the data collection and processing concerns I was having, and took a class in imaging and mapping that gave me an introduction and scaffolding of topics I would need to learn.</p>
<p>What I came up with after a lot of research and development was a set of scripts that accomplished each of the individual tasks in my pipeline that were controlled from a single driver script and could be run in batch on all of my data. Programming WIN!!! It did take me a few months to get this whole thing working, during which I probably could have processed my data by hand, but that would have been way lamer and I wouldn’t have learned nearly as much.</p>
<hr />
<h3 id="here-is-the-breakdown-of-the-processing-pipeline">Here is the breakdown of the processing pipeline:</h3>
<p>The entire pipeline is non-destructive, meaning at each step we save copies, preventing corruption of the original data and allowing us to pick the pipeline up mid-stream if necessary. Also, for embarrassingly parallel tasks these scripts will run in a multi-threaded manner. The files are heavily commented and should be relatively self-explanatory.</p>
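<p>The parallel batch pattern is the standard pool-of-workers one; a minimal sketch follows, where <code>process_image</code> is a hypothetical stand-in for the real per-frame work (undistort, crop, threshold):</p>

```python
from concurrent.futures import ThreadPoolExecutor
import os

def process_image(path):
    """Stand-in for the real per-image work (undistort, crop, threshold, ...)."""
    return os.path.basename(path).upper()

def run_batch(paths, workers=4):
    # Every frame is independent, so the batch is embarrassingly parallel
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(process_image, paths))

results = run_batch(["original/img001.jpg", "original/img002.jpg"])
```

<p>For CPU-bound OpenCV work the actual scripts use <code>multiprocessing</code> rather than threads, but the map-over-frames shape is the same.</p>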
<p>The scripts rely on a directory structure like the one depicted below, with a directory named “original” containing the raw experimental images (currently expects JPEGs but should work with other formats with little modification), and then create the rest of the folders and files:</p>
<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">EXP_DIR</span> <span class="c1"># top level experiment folder
</span> <span class="o">-</span> <span class="n">original</span> <span class="c1"># folder containing set of original images
</span> <span class="o">-</span> <span class="n">undistorted</span> <span class="c1"># folder where undistorted images will be dumped
</span> <span class="o">-</span> <span class="n">cropped</span> <span class="c1"># folder where cropped images live
</span> <span class="o">-</span> <span class="n">masked</span> <span class="c1"># folder where masked images are saved
</span> <span class="o">-</span> <span class="n">crop</span><span class="o">-</span><span class="n">corners</span><span class="o">.</span><span class="n">txt</span> <span class="c1"># file with saved crop corner coordinates
</span> <span class="o">-</span> <span class="n">saves</span><span class="o">-</span><span class="n">thresholds</span><span class="o">.</span><span class="n">txt</span> <span class="c1"># file with saved mask properties
</span> <span class="o">-</span> <span class="n">alltracks</span><span class="o">.</span><span class="n">csv</span> <span class="c1"># file containing all tracked particle trajectories
</span> <span class="o">-</span> <span class="n">goodtracks</span><span class="o">.</span><span class="n">csv</span> <span class="c1"># file containing filtered particle trajectories
</span> <span class="o">-</span> <span class="n">EXP_DIR</span><span class="o">-</span><span class="n">tracks</span><span class="o">.</span><span class="n">png</span> <span class="c1"># image with alltracks and goodtracks plotted
</span> <span class="o">-</span> <span class="n">EXP_DIR</span><span class="o">-</span><span class="n">HSV</span><span class="o">-</span><span class="n">hist</span><span class="o">.</span><span class="n">png</span> <span class="c1"># HSV channel histograms for tuning mask values
</span></code></pre></div></div>
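<p>A small helper in this spirit (hypothetical, not one of the actual pipeline scripts) can lay out that skeleton before a run:</p>

```python
import os

SUBDIRS = ["original", "undistorted", "cropped", "masked"]

def init_experiment(exp_dir):
    """Create the expected folder layout; original/ holds the raw images."""
    for sub in SUBDIRS:
        os.makedirs(os.path.join(exp_dir, sub), exist_ok=True)
    return sorted(os.listdir(exp_dir))
```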
<div>
<figure>
<img src="/assets/posts/2018-04-16-i-wrote-a-blob-tracker/original.jpg" alt="Original, unprocessed image." width="85%" />
<figcaption>Original experimental image before any processing.</figcaption>
</figure>
</div>
<hr />
<ul>
<li>Step #1 - Remove lens distortion. This functionality is implemented in
<a href="/assets/posts/2018-04-16-i-wrote-a-blob-tracker/batchLensCorrector.py">batchLensCorrector.py</a>
and
<a href="/assets/posts/2018-04-16-i-wrote-a-blob-tracker/myUndistort.py">myUndistort.py</a>
. These remove intrinsic lens distortion from camera images in batch, using the Lensfun database and EXIF data. Borrowed and modified from <a href="https://github.com/wildintellect/lenscorrection">Python Lens Correction</a>. The distortion scripts depend on:
<ul>
<li>lensfunpy</li>
<li>cv2 (I used 3.1.0 and can’t confirm these will work with any other version)</li>
<li>os</li>
<li>multiprocessing</li>
<li>exiftool</li>
<li>timeit</li>
</ul>
<div>
<figure>
<img src="/assets/posts/2018-04-16-i-wrote-a-blob-tracker/undistorted.jpg" alt="Undistorted image." width="85%" />
<figcaption>Image with lens distortion removed.</figcaption>
</figure>
</div>
</li>
<li>Step #2 - Crop original frame to a sensible ROI (region of interest) using
<a href="/assets/posts/2018-04-16-i-wrote-a-blob-tracker/clickAndCrop.py">clickAndCrop.py</a>
and
<a href="/assets/posts/2018-04-16-i-wrote-a-blob-tracker/batchCrop.py">batchCrop.py</a>
. These depend on:
<ul>
<li>argparse</li>
<li>cv2</li>
<li>json</li>
<li>os</li>
<li>time</li>
<li>multiprocessing</li>
</ul>
<div>
<figure>
<img src="/assets/posts/2018-04-16-i-wrote-a-blob-tracker/cropped.jpg" alt="Cropped image." width="85%" />
<figcaption>Undistorted image cropped down to ROI.</figcaption>
</figure>
</div>
</li>
<li>Step #3 - Convert color spaces and threshold for quick and dirty segmentation of our blobs.
<a href="/assets/posts/2018-04-16-i-wrote-a-blob-tracker/hsvHist.py">hsvHist.py</a>
returns histograms of the hue, saturation, and value channels. From these we can isolate peaks or reasonable values for our mask. The thresholds on each channel (along with blur and disk-shaped morphological opening) are then tweaked and set with
<a href="/assets/posts/2018-04-16-i-wrote-a-blob-tracker/hsvThreshInteract.py">hsvThreshInteract.py</a>
which allows the user to interact with a simple slider-based GUI to produce an optimal mask for the given experiment, save the parameters, and pass them to
<a href="/assets/posts/2018-04-16-i-wrote-a-blob-tracker/batchThreshold.py">batchThreshold.py</a>
. These thresholding scripts depend on:
<ul>
<li>cv2</li>
<li>numpy</li>
<li>matplotlib</li>
<li>argparse</li>
<li>os</li>
<li>json</li>
<li>time</li>
<li>multiprocessing</li>
</ul>
<div>
<figure>
<img src="/assets/posts/2018-04-16-i-wrote-a-blob-tracker/hsvHist.png" alt="HSV histograms." width="85%" />
<figcaption>3-channel HSV histograms.</figcaption>
</figure>
</div>
<div>
<figure>
<img src="/assets/posts/2018-04-16-i-wrote-a-blob-tracker/masked.jpg" alt="Masked image." width="85%" />
<figcaption>Image with mask applied.</figcaption>
</figure>
</div>
</li>
<li>Finally, the masked images of the blobs are passed to
<a href="/assets/posts/2018-04-16-i-wrote-a-blob-tracker/maskTrack.py">maskTrack.py</a>
which utilizes <a href="https://soft-matter.github.io/trackpy/v0.3.2/#">TrackPy</a> (which has its own suite of dependencies including PyFFTW and Numba) to match and track blobs through subsequent frames and create blob trajectories that are my ultimate goal. This one depends on:
<ul>
<li>os</li>
<li>time</li>
<li>argparse</li>
<li>pandas</li>
<li>matplotlib</li>
<li>skimage</li>
<li>trackpy</li>
<li>pims</li>
</ul>
<div>
<figure>
<img src="/assets/posts/2018-04-16-i-wrote-a-blob-tracker/tracks.png" alt="Particle trajectories." width="85%" />
<figcaption>All tracked particle trajectories (left panel) and filtered / sufficiently long trajectories (right panel).</figcaption>
</figure>
</div>
</li>
<li>The script
<a href="/assets/posts/2018-04-16-i-wrote-a-blob-tracker/processingDriver.py">processingDriver.py</a>
glues it all together and creates a single interface to call each of the above in the correct order, allowing for a full experiment to be processed in a few short minutes. It depends on:
<ul>
<li>os</li>
<li>subprocess</li>
</ul>
</li>
</ul>
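<p>The driver pattern itself is simple enough to sketch: invoke each stage script in order and stop on the first failure. The stage names follow the files above, but treat this as an illustration rather than the actual processingDriver.py (which the interactive crop/threshold steps make more involved).</p>

```python
import subprocess
import sys

# Pipeline stages in the order described above; each is assumed here
# to take the experiment directory as its only argument.
STAGES = [
    "batchLensCorrector.py",
    "batchCrop.py",
    "batchThreshold.py",
    "maskTrack.py",
]

def run_pipeline(exp_dir, stages=STAGES, runner=subprocess.run):
    """Run each stage as a subprocess, stopping on the first failure."""
    for script in stages:
        result = runner([sys.executable, script, exp_dir])
        if getattr(result, "returncode", 0) != 0:
            raise RuntimeError(f"stage failed: {script}")
```

<p>Injecting <code>runner</code> keeps the driver testable without actually spawning the stage scripts.</p>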
<hr />
<p>Each of these scripts took me a good while to develop and relied heavily on reading blog posts by Adrian Rosebrock at <a href="https://www.pyimagesearch.com/">PyImageSearch</a> and Satya Mallick at <a href="https://www.learnopencv.com/">LearnOpenCV</a>, as well as the <a href="https://docs.opencv.org/3.1.0/index.html">OpenCV docs, examples, and tutorials</a>, plus countless other individual sites and posts.</p>
<p>I hope you find these scripts useful to adapt for your own use, or as a learning resource in your own R&amp;D process. I will probably try to write a detailed breakdown of each one in future posts, but for now thought just getting this out here would be a good start. Also, I fully intend to post them to GitHub with more complete documentation to accompany these posts, but again, what’s here is better than nothing!</p>
<hr />
<h1>Satellite Science FTW</h1>
<p><em>Tucker Sylvia, 2018-01-30</em></p>
<h3 id="in-the-modern-technological-era-dominated-by-big-data-and-the-surveilance-state-scientific-measurements-are-still-surprisingly-sparse-in-space-and-time">In the modern technological era dominated by big data and the surveillance state, scientific measurements are still surprisingly sparse in space and time.</h3>
<div>
<figure>
<img src="/assets/posts/2018-01-30-satellite-science-ftw/imageyukonde.jpg" alt="Sentinel-2 image of Yukon River Delta" width="100%" />
<figcaption>Remote sensing and satellite observations are changing the landscape of modern science for the better.</figcaption>
</figure>
</div>
<hr />
<h3 id="sources-physorg-where-i-saw-it-and-the-original">Sources: <a href="https://phys.org/news/2018-01-image-yukon-delta.html">Phys.org (where I saw it)</a> and the <a href="http://www.esa.int/spaceinimages/Images/2018/01/Yukon_Delta">Original</a></h3>
<hr />
<h2 id="nice-view">Nice View!</h2>
<p>The above photo (well technically it’s probably a pseudo-color image) caught my eye as I was browsing through my news feed (I use <a href="https://feedly.com/">Feedly</a>). I have seen a lot of satellite imagery and am familiar with the interpretation and usefulness of this data. Sometimes these remarkable tools yield fields that are visually striking.</p>
<h3 id="first-a-geological-aside">First, A Geological Aside:</h3>
<p>The above image shows the Yukon River Delta. Deltas (named by Herodotus for their somewhat triangular form reminiscent of the Greek delta character, <script type="math/tex">\Delta</script>) are margins where rivers meet the sea (or another body of water) and deposit the last of the sediment load that they eroded and carried away from the highlands. The study of deltaic processes is a pretty neat niche within sedimentology, as they are deposited rapidly and short lived in geologic terms, and old deltas can have consequences for oil and gas exploration.</p>
<p>Here we see the main drain of the Yukon territory and Arctic Northwest bifurcate into a complex network of meandering distributary channels. This delta has a classic lobate morphology and hummocky texture with a smooth shoreline, which means there is an abundant supply of sediment being delivered, enhancing the delta’s construction. Simultaneously there are destructive processes at work in the form of ice-shaped features and wave and tidal action. The construction/destruction relationship is how deltas are classified and compared, and it allows us to examine the relative contributions of changes in sea level, wave energy, tidal energy, sedimentation, flow intensity, seasonal melt, ice, etc. with time. The sea water near the terminus of the red and green sub-aerial parts of the delta is a light white/beige/gray in color because of sediment still suspended in the water column, usually because of small grain size. The primary mechanism of deposition for deltas is particles falling out of suspension in response to a reduction in flow velocity as the river becomes less confined.</p>
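<p>That last step can be made quantitative. For fine grains, the terminal settling speed is commonly approximated by Stokes’ law (standard sedimentology, not stated in the original article):</p>
<script type="math/tex; mode=display">w_s = \frac{(\rho_s - \rho_f)\, g\, d^2}{18\,\mu}</script>
<p>where <script type="math/tex">\rho_s</script> and <script type="math/tex">\rho_f</script> are the sediment and fluid densities, <script type="math/tex">d</script> is the grain diameter, and <script type="math/tex">\mu</script> is the fluid viscosity. The <script type="math/tex">d^2</script> dependence is why the finest sediment stays in suspension far past the river mouth, producing the milky plume visible in the image.</p>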
<h3 id="bigger-picture">Bigger Picture</h3>
<h4 id="that-might-be-a-bit-pun-ish">That might be a bit pun-ish…</h4>
<p>Data like this has been used to quantify changes in sediment transport and deposition rates. An increasing trend in sediment transport in the region has been interpreted to correlate with increased erosion associated with enhanced spring melt and thawing of permafrost - impacts of global climate change. This simple and beautiful image is one example of how remote sensing and satellite platforms continue to make significant measurements and draw impactful conclusions that would be impossible to measure with traditional field techniques.</p>
<p>I chose to write a little blurb about this photo because to me it showcases the importance of investing in scientific infrastructure. If the <a href="http://www.esa.int/">European Space Agency</a> had not secured the funding to launch the massive <a href="http://www.esa.int/Our_Activities/Observing_the_Earth/Copernicus/Overview4">Copernicus</a> missions, we would lack a comprehensive data set and some exceptional work would never have been possible. Many scientific satellites in orbit measure all sorts of fields and properties of Earth’s complex and dynamic systems, allowing us to have much greater spatial and temporal coverage of some very remote and hard-to-sample places. Coverage is a fundamental challenge for most of the Earth sciences, especially when it comes to understanding vast, remote high-latitude regions and the expansive depths of the oceans.</p>
<p>A fundamental misunderstanding is that these very expensive and highly precise instruments are <em>just</em> taking pictures, but they do much more than that. For instance, the <a href="https://en.wikipedia.org/wiki/Sentinel-2">Sentinel-2</a> pair of satellites measure in 13 frequency bands spanning the visible and infrared parts of the EM spectrum. The main purpose of this mission is monitoring land use and vegetation changes, but obviously the data, which happens to be free and open, can be utilized by creative scientists to solve a much more expansive set of problems, as this image demonstrates.</p>
<p>Observations lay the foundation for improving our understanding of our environment, and it’s imperative that we continue to invest in basic research and observational infrastructure. In addition to the above Yukon photo, I have recently read two other articles and listened to a podcast, all touching on this topic and inspiring me. One was another satellite story, and <a href="https://sattrackcam.blogspot.com/2017/12/where-to-hide-your-nuclear-missile.html">a very cool one at that</a>, using open data to explore the coverage of current operational missile detection satellites. <a href="https://www.nature.com/articles/d41586-017-08967-y">The other</a> points out that a pretty small initial investment in an integrated whole-Earth monitoring network could provide substantial scientific and societal benefits. Planet Money did a series of episodes on <a href="https://www.npr.org/sections/money/2017/12/01/567267573/planet-money-goes-to-space">launching their own cubesat</a>. The entirety of our present knowledge of natural laws and modern technological advancement is rooted in the findings of generations of inquisitive minds who actively observed the world around them. Now is the time to give back to ourselves and the international scientific community by rejuvenating public interest in funding basic research infrastructure and highlighting the services and concrete outcomes we all benefit from.</p>
<p><em>EDIT 1/31/18 - Satellites are becoming an oddly common theme; <a href="https://skyriddles.wordpress.com/2018/01/21/nasas-long-dead-image-satellite-is-alive/">here is another interesting post</a> I just read about a guy who looks for satellites and just found a <a href="https://www.nasa.gov/feature/goddard/2018/nasa-image-confirmed">lost NASA probe called IMAGE</a>. Pretty neat.</em></p>
<hr />
<h1>Beagleprint Part 2</h1>
<p><em>Tucker Sylvia, 2018-01-17</em></p>
<h3 id="because-one-must-have-network-control">Because one must have network control…</h3>
<p>Nearly a year later, here is the second post in the 3D printer / Beaglebone / Octoprint saga.</p>
<p>I hope to cover the software details here because most of the relevant hardware info is covered in <a href="/2017/02/22/beagleprint-part-1/">Part1</a>.</p>
<div>
<figure>
<img src="/assets/posts/2018-01-17-beagleprint-part-2/beagle-box.JPG" alt="Beaglebone and printer control box" width="85%" />
<figcaption>Beaglebone (in the altoids tin) with my switchable 4-port USB hub, serial cable, and the printer control box.</figcaption>
</figure>
</div>
<h3 id="non-printer-hardware">Non Printer Hardware</h3>
<p>The Duplicator i3 runs a Melzi board with a Repetier based firmware. Octoprint picks it up no problem over USB/TTY with a reasonably modern kernel.</p>
<p>I have had issues with USB connectivity though; I have tried different cables, long and short, and have had the best luck with one that has extra shielding. It could be an issue with my cheap 4-port USB hub, but I’m not sure. If you suffer random serial disconnects mid-print it’s likely a cable issue. The worst part is that the printer hangs after the last command received and does not cool down, which can be a dangerous situation. I wish Octoprint had functionality to automagically reconnect to the printer and kill the hotend and bed after a catastrophic communication issue, but I am not sure this exists yet.</p>
<h3 id="beaglebone-black-prep">Beaglebone Black prep</h3>
<p>I am using a Beaglebone Black Rev A6. I jumped on the Beaglebone bandwagon early on; I don’t care that the Raspberry Pi has a larger following. This thing cost me $55 in May 2013 (14?) and still has the balls to run a full Debian system with some daemons on about 1 amp including accessories. The A6 is a little crippled compared to the newer <a href="https://www.adafruit.com/product/1996">C</a> and <a href="https://www.seeedstudio.com/SeeedStudio-BeagleBon">Green</a> revisions of the board, but half a gig of RAM and 2 gigs of eMMC still works for me. I boot and run it off the SD card slot anyway, which I will get to. The Beaglebones go for a little more money now and still have 512 MB of RAM but moved up to 4 GB of eMMC, with basically the same 1 GHz TI ARM Cortex-A8 processor.</p>
<p>Whatever the specs and cost, Beaglebone Blacks (and Greens) are a great RasPi alternative and have a little more capability as a Linux server. The Beaglebones are also lacking a bit in graphics capability compared to the Pi, but for a headless server that is not a factor. When I am running prints, serving over the LAN or tunnelled in with VPN or SSH, it uses around 3% CPU to control the print and maintain the server connection <em>including the webcam</em>. One resource-intensive process is the rendering of the timelapse with <strong>FFMPEG</strong>, for which I think I have a workaround: compiling some libraries (libjpeg, libgphoto2, libavtools, etc.) myself instead of using the precompiled ARM binary .debs.</p>
<p>To get the Beaglebone Black (or Green, or whatever other computer you’re using) ready you should at least get an up-to-date copy of whatever OS you want to run. For me this was a Jessie image from the eLinux Beaglebone builds page. They offer standalone and installable versions. I chose the 4 GB standalone version because I knew I would be running it off the SD slot indefinitely due to the small amount of onboard eMMC. If you want to flash it to the onboard eMMC you need to know the available capacity for your board revision, so you can use one of the prefab images or load your root onto the onboard flash and everything else onto some other storage. This is <strong>not</strong> a Linux how-to guide, but I will tell you how I did it. With the old revisions of the board it’s not practical to use one of the newer and larger prefab images to flash the onboard eMMC. It’s not hard to enable the flasher on a smaller / custom image, and there is documentation to do that. I used the newest Debian Jessie image and dd’d it to an SD card:</p>
<div class="language-terminal highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="gp">$</span><span class="w"> </span><span class="nb">dd </span><span class="k">if</span><span class="o">=</span>beaglebone-image.iso <span class="nv">of</span><span class="o">=</span>/path/to/device
</code></pre></div></div>
<p>There are multiple ways to achieve this, like if you download a zipped version:</p>
<div class="language-terminal highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="gp">$</span><span class="w"> </span>xzcat file-i-downloaded.xz | <span class="nb">dd</span> <span class="nv">of</span><span class="o">=</span>/path/to/device
</code></pre></div></div>
<p>Seems like a problem you can find the answer to if you are reading this. I also frequently use the convenient <a href="https://launchpad.net/usb-imagewriter"><strong>USB Image Writer</strong></a> included with Linux Mint (my distro of choice that runs my laptop, HTPC, and workstation).</p>
<p>After that, boot and disable all the <del>BS</del> extra daemons you don’t need: Apache, NodeJS, and any other services that are on by default that you don’t want (including X and any login and display managers if you don’t need graphics). Make sure to update everything:</p>
<div class="language-terminal highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="gp">$</span><span class="w"> </span>apt update
<span class="gp">$</span><span class="w"> </span>apt upgrade
</code></pre></div></div>
<p>before moving on and installing Octoprint.</p>
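<p>As a concrete, hedged example of that daemon cleanup (the service names below are guesses; check what your image actually runs first):</p>

```shell
$ systemctl list-units --type=service --state=running   # see what is running
$ sudo systemctl stop apache2                           # example name; adjust
$ sudo systemctl disable apache2                        # keep it off at boot
```

<p>Repeat the stop / disable pair for anything else you don’t want eating RAM and CPU on a tiny board.</p>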
<h3 id="octoprint-config">Octoprint config</h3>
<p>Once you get a reasonable server up, the Octoprint setup is easy. I followed <a href="https://github.com/foosel/OctoPrint/wiki/Setup-on-a-Raspberry-Pi-running-Raspbian">this</a> guide with minimal modifications.</p>
<h4 id="webcam">Webcam</h4>
<p>Same as <a href="https://github.com/foosel/OctoPrint/wiki/Setup-on-a-Raspberry-Pi-running-Raspbian#webcam">here</a> following instructions for Jessie.</p>
<div>
<figure>
<img src="/assets/posts/2018-01-17-beagleprint-part-2/webcam.JPG" alt="Close up of build plate and webcam" width="85%" />
<figcaption>Close up of the build volume, plate, cooler, and webcam facing towards us in the negative Y direction.</figcaption>
</figure>
</div>
<h4 id="other-plugins">Other plugins</h4>
<p>I added some useful plugins; there are a bunch of good ones available. Some I have added: cost, curaengine, detailed progress, discovery, display progress, eeprom repetier editor, printer stats, filemanager, fullscreen webcam, and terminal commands. Lots of great functionality for whatever you want or need.</p>
<h4 id="success">Success!</h4>
<p>Now you have a nice low powered ARM server (or whatever) connected to your 3d printer over USB serial and a webcam to watch the whole process. This is great because you can fully control the printer and check on prints from within your home network without resorting to the infamous <a href="https://en.wikipedia.org/wiki/Sneakernet"><em>sneakernet</em></a>.</p>
<h3 id="getting-in-from-the-internet">Getting in from the internet</h3>
<h4 id="ssh-tunneling-and-forwarding-with-a-hop">SSH tunneling and forwarding with a hop</h4>
<p>You have options here. I have a computer set up with an <strong>SSH</strong> server running on an obscure port that I forward through my main gateway router, because I am a nerd and like to be able to SSH into a machine on my local network if I need to. The reason for not using port 22: instead of relying on fail2ban or iptables rules to mitigate the absurd number of bad login attempts you end up with on 22, those blanket attempts never even happen, because most bots are not actively probing all ports on potential hosts.</p>
<p>With this setup I can forward the Octoprint webserver over a secure tunnel with a single hop from wherever. I like this because it’s free and not too complicated. The command I use looks something like this, saved into a script:</p>
<div class="language-terminal highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="gp">$</span><span class="w"> </span>ssh <span class="nt">-p</span> PORT <span class="nt">-L</span> 9999:beaglebone.local:5000 <span class="nt">-N</span> HOMEIP
</code></pre></div></div>
<p>This forwards the Octoprint server running on 5000 (default) to local port 9999 on my laptop (haven’t tried it on my phone yet, but idk if Apple lets you access local ports at all…) and works like a charm. Other solutions at <a href="https://superuser.com/questions/96489/an-ssh-tunnel-via-multiple-hops">this</a> Stack answer, or Google <strong>SSH tunnel hop</strong>.</p>
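<p>With a reasonably new OpenSSH (7.3+) you can also express the hop with the <code>-J</code> (ProxyJump) flag instead of chaining commands; the user and hostnames below are placeholders for your own:</p>

```shell
$ ssh -J user@HOMEIP:PORT -L 9999:localhost:5000 -N user@beaglebone.local
```

<p>This jumps through the home SSH server and forwards the Beaglebone’s own port 5000 straight to local port 9999.</p>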
<h4 id="alternatives">Alternatives</h4>
<p>Another new solution in the form of an <a href="https://plugins.octoprint.org/plugins/astroprint/">Octoprint plugin</a> that interfaces with <a href="https://www.astroprint.com/products/p/astroprint-cloud">AstroPrint Cloud</a> seems intriguing, but I kind of like having control over my own stuff. A common theme for me is spending <em>lots</em> of time cobbling together a free solution to avoid messing with third party cloud services. AstroPi is another project that I investigated briefly at the beginning.</p>
<h3 id="the-end">The end</h3>
<p>That’s it for now, I hope you found this helpful. Check back for more on my 3d printing sagas and other home networking schtuff.</p>Tucker SylviaBecause one must have network control…Radius Volume Relationships of Spheres2017-03-23T04:50:00-04:002017-03-23T04:50:00-04:00http://tuckersylvia.com/2017/03/23/radius-volume<h3 id="reminder-you-should-still-be-learning-something-new-everyday">Reminder: You should still be learning something new everyday!</h3>
<p>So I had a fairly simple question while writing my thesis the other day. I was writing about Stokes rise and buoyant settling / ascent of spherical bodies in a viscous fluid. I wanted to know what the change in terminal velocity was if you doubled the sphere’s volume, keeping all other parameters (density contrast, ambient viscosity) constant. Below I will show you the simple relationships and assumptions I used to get an estimate for this quantity. Along the way we will touch on what I think has been the most valuable skill I have learned in my time at graduate school: <strong>estimation and back of the envelope calculations</strong>.</p>
<p>If you can reasonably estimate characteristic properties of whatever problem or system is concerning you then you can say a lot about it in a very robust way. Sure, you may not encompass all the details at all scales but you will definitely be able to make reasonable estimates and assertions. Sometimes you will hear this practice referred to as <em>scaling arguments, dimensional analysis, or nondimensionalization</em>. This is actually what differentiates most disciplines concerned with studying dynamical systems: the assumptions we make and scalings we choose define the simplifications we can make to our equations, which for most practical purposes start out the same (or are at least constructed from the same building blocks).</p>
<p>My masters research has focused on subduction zone geodynamics, specifically physical-tectonic-fluid dynamic analogue laboratory modeling of the viscously creeping <a href="https://en.wikipedia.org/wiki/Mantle_wedge">mantle wedge</a>. This project has led me down quite a few rabbit holes over the past three years and I have learned a lot of little tidbits along the way from a range of seemingly disparate fields: geology and geophysics, numerical methods and computational simulation, image processing and computer vision, electrical and mechanical engineering, differential equations and linear algebra, computer science and programming, and more. I wish I could fit all of it onto my CV but it would be <em>wicked</em> long.</p>
<p>(For the uninitiated, <em>wicked</em> is a term of endearment where I’m from in Rhode Island, the smallest of the United States. The best things in life are <em>wicked awesome</em>, like <a href="https://en.wikipedia.org/wiki/Del's">Del’s</a> frozen lemonade and coffee <a href="https://www.google.com/search?q=awful+awful&oq=awful+a&aqs=chrome.0.0j69i57j0l4.1897j0j7&sourceid=chrome&ie=UTF-8">Awful Awful’s</a>.)</p>
<h3 id="volume-of-a-sphere">Volume of a sphere:</h3>
<p>Back to our scheduled programming, I needed to take a break from writing and figure out this problem.
The volume of a sphere is defined as:</p>
<script type="math/tex; mode=display">V = \frac{4}{3} \pi r^3</script>
<p>We can rearrange this to calculate the radius from a known volume:</p>
<script type="math/tex; mode=display">r = \sqrt[3]{\frac{3}{4\pi} V}</script>
<p>Using a unit volume of 1 and doubling it, we can calculate two <em>dimensionless</em> (kinda) radii to use in calculating a scaling factor:</p>
<script type="math/tex; mode=display">r_1 = \sqrt[3]{\frac{3}{4\pi} (1)} = \frac{\sqrt[3]{\frac{3}{\pi}}}{2^{2/3}} \approx 0.620</script>
<script type="math/tex; mode=display">r_2 = \sqrt[3]{\frac{3}{4\pi} (2)} = \sqrt[3]{\frac{3}{2\pi}} \approx 0.782</script>
<p>Taking the ratio of those two yields a nice scaling factor:</p>
<script type="math/tex; mode=display">\frac{\sqrt[3]{\frac{3}{2\pi}}} {\frac{\sqrt[3]{\frac{3}{\pi}}}{2^{2/3}}} = \sqrt[3]{2} \approx 1.26</script>
<p>Or in other words, a ~26% increase in radius for a doubling in volume.</p>
<p>This is a nice relationship to know in general for the future. It allows us to guesstimate a radius from a volume <em>change</em> without having to explicitly calculate it from a known volume.</p>
<p>Another, probably more refined approach, in contrast with the <em>plug-and-chug</em> method used above, would be to rearrange the volume equation as we did above to get the radius r as a function of V, r(V). Then just take the derivative to get r′(V) or dr/dV, the change in radius with respect to a change in volume. This would give us a more generic function to explore other specific volume changes besides doubling. I should probably do that and plot it below…<strong>TODO</strong></p>
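<p>In the meantime, here is a minimal sketch of that check (my own code, not from the derivation above): since r(V) = (3V/4π)^(1/3), the analytic derivative works out to dr/dV = r/(3V), which a finite difference confirms.</p>

```python
import math

def radius(V):
    """Radius of a sphere with volume V: r = (3V / 4*pi)^(1/3)."""
    return (3.0 * V / (4.0 * math.pi)) ** (1.0 / 3.0)

def dr_dV(V):
    """Analytic derivative: (1/3) * (3/(4*pi))^(1/3) * V^(-2/3) = r / (3V)."""
    return radius(V) / (3.0 * V)

# Doubling a unit volume reproduces the cube-root-of-two scaling factor:
ratio = radius(2.0) / radius(1.0)   # 2^(1/3), ~1.26

# Sanity-check the derivative against a central finite difference at V = 1:
h = 1e-6
fd = (radius(1.0 + h) - radius(1.0 - h)) / (2.0 * h)
```
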
<h3 id="stokes-law">Stokes Law:</h3>
<p>Now that we know how much the radius will change for a doubling of volume, we need to know what effect that will have on the Stokes terminal velocity.</p>
<p>Stokes law for the terminal velocity of a spherical body sinking or falling through a fluid is defined by the balance of the gravitational forces with the buoyancy and drag forces: <strong>Fg = Fb + Fd</strong>. A few nice breakdown articles can be found at the <a href="http://scienceworld.wolfram.com/physics/StokesVelocity.html">World of Physics</a> and <a href="https://en.wikipedia.org/wiki/Stokes%27_law#Terminal_velocity_of_sphere_falling_in_a_fluid">WikiPedia</a>. The end result after some rearranging for terminal velocity is:</p>
<script type="math/tex; mode=display">v_t = \frac{2}{9} \frac{g(\rho^{'} - \rho) r^2}{\eta}</script>
<p>If we assume that the density contrast and matrix viscosity remain constant we can use the same <em>plug and chug</em> approach as before substituting in Unity and 1.26 for values of the radius.</p>
<script type="math/tex; mode=display">C = \frac{2}{9} \frac{g(\rho^{'} - \rho)}{\eta} = constant</script>
<script type="math/tex; mode=display">v_1 = C (1)^2 = C</script>
<script type="math/tex; mode=display">v_{1.26} = C (1.26)^2 \approx 1.59 C</script>
<p>This shows that there is a ~59% increase in the terminal Stokes velocity for a sphere with double the volume in the same fluid environment. Pretty neat.</p>
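<p>For completeness, the same plug-and-chug in a few lines of code (again my own illustration; C stands in for the constant prefactor, which cancels in the ratio):</p>

```python
def stokes_velocity(r, C=1.0):
    """Terminal velocity v_t = C * r^2, with C = (2/9) * g * (rho' - rho) / eta."""
    return C * r ** 2

r1 = 1.0                      # radius of the reference sphere
r2 = 2.0 ** (1.0 / 3.0)       # radius after doubling the volume, ~1.26
speedup = stokes_velocity(r2) / stokes_velocity(r1)   # 2^(2/3), ~1.59
```
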
<h3 id="closing-remarks">Closing remarks:</h3>
<p>Well, there was not really any traditional scaling analysis here but you can hopefully still see how <em>nondimensionalization</em> (in my very broad sense) can be utilized to gain insight into all kinds of problems.</p>
<p>We have also learned that if you let two balloons go at the same time and one contains twice the volume of helium it will ascend 60% faster than the smaller one when they reach their respective terminal velocities. The larger balloon would also accelerate faster and get to terminal velocity more quickly too, but that is not part of this story… yet. If you get in a balloon race any time soon you now know how to win.</p>Tucker SylviaReminder: You should still be learning something new everyday!Etymology of Suffixes2017-03-02T03:50:00-05:002017-03-02T03:50:00-05:00http://tuckersylvia.com/2017/03/02/suffixes-and-etymology<h3 id="learn-something-new-everyday">Learn Something New Everyday!</h3>
<p>So my little tidbit for today will be about a random piece of knowledge I picked up while doing some more thesis work.</p>
<p>I am technically a geologist by training, and for some reason the etymology of the word geology has always been explained to me in one way that until now I never questioned: it is the combination of <em>geo</em> and <em>logos</em>. Now this seemed all well and good to me because those old greek words have relatively solid definitions (although my perspective on solids and fluids may be a bit skewed, but we will save that for another day). To me <em>Geo-</em> means of the earth, and <em>-logos</em> or <em>-logia</em> means study of.</p>
<p>Here is where the headache happened - for some reason in my mind I had it that <em>-logy</em> was somehow more related to the “hard sciences” and other suffixes like <em>-graphy</em> and <em>-onomy</em> could be reserved for what some may deem less rigorous fields. But that doesn’t make any sense at all. Math<em>ematics</em>, phys<em>ics</em>, chem<em>istry</em>, bio<em>logy</em>, econ<em>omics</em>, psycho<em>logy</em>, psych<em>iatry</em>, ge<em>nomics</em>, geo<em>graphy</em>, topo<em>logy</em>, topo<em>graphy</em>, astro<em>nomy</em>, astro<em>logy</em>, heck there could feasibly be an astro<em>graphy</em>. This is a futile attempt at conveying the endlessness of this list and the subtle semantic differences brought on by the changing suffixes.</p>
<p>Luckily I was not the first person to ask this question (I usually never am), and I found a few explanations. <a href="https://www.google.com/search?rlz=1CDGOYI_enUS710US710&hl=en-US&ei=jZanWIfTL-ut0gLJw5yQBg&q=ology+and+graphy&oq=logy+vs+gra&gs_l=mobile-gws-serp.1.1.0i13i30k1j0i22i10i30k1j0i13i30k1j0i13i5i30k1j0i13i30k1.1792.14050.0.15963.16.16.0.1.1.0.198.2490.0j15.15.0....0...1c.1.64.mobile-gws-serp..1.12.1827...0j0i67k1j0i10k1j0i20k1j35i39k1j0i131k1j0i13k1j0i13i10k1.K15DLn0_hl4#hl=en-US&q=ology+ography+onomy+ometry+omics&*">What I typed into Google</a>, and the first result, from <a href="http://english.stackexchange.com/questions/116456/meaning-of-onomy-ology-and-ography">Stack English</a>.</p>
<p>The general gist seems to be that <a href="https://en.wikipedia.org/wiki/-logy"><em>logy</em></a> is reserved for learning / explaining / studying a given subject.</p>
<p><a href="https://en.wiktionary.org/wiki/-nomy#English"><em>nomy</em></a> is supposedly related to rules, laws, or customs. I don’t really get this one because it could be seen as painting with a fairly broad brush (eco<em>nomy</em>, taxo<em>nomy</em>, astro<em>nomy</em>, etc.). Additionally, it is not at all clear how <em>nomy</em> relates to its cousin <em>nomics</em>, because for some fields it is an easy bridge (eco<em>nomics</em> is the <del>study</del> laws, rules, and customs of the eco<em>nomy</em>), but I am unaware of any taxo<em>nomics</em> (taxo<em>nomy</em> is basically the naming and classification of species, not to be confused with taxidermy) or geno<em>nomy</em> (ge<em>nomics</em> is the study and quantification of the genome, or the entire genetic makeup of a population/species).</p>
<p><a href="https://en.wikipedia.org/wiki/-graphy"><em>graphy</em></a> apparently generally refers to writing about a field of study, but geo<em>graphy</em> is definitely not writing about geo<em>logy</em>; these are fundamentally two different subjects. Stegano<em>graphy</em> is also definitely not writing about covering things up, but rather the art of hiding secret information in plain sight, or using a non-secret carrier for your encrypted message (like hiding exploit code in a JPEG, for example…).</p>
<p><a href="https://en.wiktionary.org/wiki/-metry"><em>metry</em></a> seems relatively straightforward, relating to measurement, but still, geo<em>metry</em> is not the study of measuring the earth.</p>
<p>I am now going to try to “conjugate” <em>geo-</em> with as many suffixes as I can and see how many of them make sense or are correct.</p>
<ul>
<li>Geology - study of the earth</li>
<li>Geography - study of places on earth</li>
<li>Geometry - study of shapes</li>
<li>Geonomy - naming places on earth?</li>
<li>Geonomics - study of naming places on earth?</li>
<li>Geonometrics - measuring the study of the naming of places on earth?</li>
<li>Geometric - earth measure?</li>
<li>Geometrics - study of earth measures?</li>
</ul>
<p>Shit is clearly broken.</p>
<p>There are also a whole suite of subdisciplines, even within the earth sciences, that do not adhere to the guidelines, for example lithostratigraphy, oceanography, oceanology, hydrography, hydrology, etc. I am not sure if I am any less confused, but there do seem to be some loose definitions as to which suffix properly belongs to each discipline. The problem now appears to be that people have latched on to the improper suffix for many disciplines and we just have to roll with it, <em>e.g.</em> astr<em>onomy</em> and astr<em>ology</em> are seemingly reversed. Medical subdisciplines also seem to have some variance from the “right” way. Whatever, I have now wasted at least half an hour reading and writing about this so I will surely never forget…</p>
<p>Cheers,</p>
<p>-Tucker</p>Tucker SylviaLearn Something New Everyday!Beagleprint Part 12017-02-22T19:50:00-05:002017-02-22T19:50:00-05:00http://tuckersylvia.com/2017/02/22/beagleprint-part-1<h3 id="because-hacking-is-fun-and-i-will-look-for-any-excuse-to-avoid-my-thesis-during-a-snow-storm">Because hacking is fun, and I will look for any excuse to avoid my thesis during a snow storm</h3>
<h4 id="how-to-bend-modern-technology-to-perform-exceptionally"><em>How to bend modern technology to perform exceptionally.</em></h4>
<h3 id="preface">Preface</h3>
<p>Well, here will reside my first attempt at a bona fide blog post.</p>
<p>I have decided that the main purpose of this site will be to document little side projects I complete and problems I manage to solve at home, work, hobbying, and in general.</p>
<p>Many of the posts will probably come out merely as a culmination of links and research that I have done and mashed together. For me that provides a convenient place to go back and find resources without digging through my gargantuan pile of semi-organized bookmarks.</p>
<p>I will almost certainly not implement commenting because I do not write this to solicit opinions. I also do not claim in any way to be an authority on any topic(s); in fact <strong>I am purely a hobbyist who utilizes these freely available tools and resources only to further my own knowledge. If you do find my writing useful please cite me appropriately.</strong></p>
<h3 id="getting-to-it">Getting to it</h3>
<p>So, without further ado, I will now present you with a description and guide as to how I set up my new 3D printer and an Octoprint server running on a Beaglebone Black to control it over the network. This is probably going to turn out more lengthy than I intend, so I will break it up as is commensurate with the content and my time.</p>
<h3 id="3d-printer-choice-and-initial-setup--configuration">3D Printer Choice and Initial Setup / Configuration:</h3>
<p>I really wanted a 3D printer for Christmas, and thanks to the Amazon gift card gods and many generous <del>donations</del> gifts from family and friends I was able to purchase a <a href="http://a.co/bJzHlRA">Monoprice Maker Select v2</a> (I think; version numbers are sketchy when dealing with rebranded Chinese stuff. This is a Monoprice imported and rebranded version of the Wanhao Duplicator i3). I decided on this model because it has all the relevant features, a large community for support and advice, and an unbeatable price at <strong>$330</strong>.</p>
<p>During my primary research phase I did a lot of reading (as one does). Before taking the plunge I came to the conclusion that although this model requires some tinkering to get it to print optimally, that is in fact exactly the amount of tinkering I was looking for in the hobby. I like to solve problems, but nobody wants a headache, and a headache is what you may get with some of the other, cheaper kits out there. This printer comes basically assembled and ready to print out of the box. This gets you going, and then as you gain experience little issues become apparent and give you manageable things to tweak. This produces more of a ramp than a wall in terms of learning “curve”.</p>
<p>There <strong>were</strong> some other candidates: the Monoprice Maker Select Mini, genuine Prusa i3, factory Wanhao Duplicator i3 Plus (a refresh of this model), basically anything in the $250-$500 range that had a Cartesian geometry and was not a total turd. The deciding factor for me really was the community and documentation. There are a lot of i3 clones out there, but the information is so readily available for this specific printer that it just makes sense; there are probably 10,000 of these things in active use.</p>
<p><em>Disclaimer I am a self-proclaimed <a href="https://www.gnu.org/philosophy/floss-and-foss.en.html">FLOSS</a> advocate, user, and supporter</em></p>
<p>I opted for the v2 over the Plus because I liked the idea of an external control box. If the power supply does ever go up in flames it is not directly under the bed like in the newer model. Additionally there is plenty more room for mods: I plan to add some relays to control lighting and power via the Beaglebone, and eventually mount the Beaglebone itself into the control box too (right now it is mounted on the underside of the table the printer sits on in my boiler room). Finally, the actual printer hardware specs are basically identical between revisions and I couldn’t justify an extra ~$100 just for the relocated controls when I could allocate those resources to buy a few spools of filament or spend it on some of the mods required by both versions.</p>
<h4 id="filament">Filament</h4>
<p>Speaking of <strong>filament</strong>, I bought and will continue to buy mine from <a href="https://www.makergeeks.com/"><strong>MakerGeeks</strong></a>. They make it themselves in the <strong>USA</strong> to a very high standard, and have fantastic prices. I opted to get the <a href="https://www.makergeeks.com/products/maker-filament-grab-bag-2kg-44lbs">limited time grab bag</a> for <strong>$60</strong>, which is a fscking awesome deal. I got two spools of ABS and two PLA, you don’t get to choose color but it’s not like I have any specific projects or anything, and you can paint it easily. I ended up with red and black PLA and yellow and green ABS.</p>
<p>When the magic man in the brown truck delivers your printer, you open the box (right side up… trust me) and the unit comes in three pieces that are fully wired together. This means you have to take some care to get it all unpacked and oriented correctly. Simple instructions (they could be a little more descriptive) show you where to insert something like four screws and then you’re off. Pry the factory test print off the bed, level the bed, and print away with one of the four models that come preloaded on the supplied SD card (which, surprisingly, is not total and utter crap). The files on my card were numbered, but there’s a flat butterfly, a baby elephant, a cushy chair, and a swan, <em>I think in that order, your mileage may vary</em>. There is also a PDF manual and an old 15.x EXE version of Cura with the Wanhao profile already set up on the card. I did not use the supplied slicing software because I run Linux on everything. I found a downloadable INI with the stock Wanhao Cura settings somewhere, but I ended up creating my own profile from scratch after some trial and error.</p>
<h4 id="firmware-to-be-continued">Firmware (to be continued)</h4>
<p>My main issue turned out to be that the acceleration and jerk are tuned way too high in the factory firmware settings in an attempt to show off how fast this machine can print, to the detriment of print quality. The test prints all print well and at decent speed with the supplied profile but I was less successful when trying to print my own designs or things I downloaded from Thingiverse and elsewhere.</p>
<p>After you burn through the included “10 m” (more like 10 ft) of filament you will inevitably have some work to do. This is where all that great community documentation comes in. I have found <a href="http://3dprinterwiki.info/wiki/wanhao-duplicator-i3/">this</a> to be the best starting point for this printer. You will most certainly want to print up some parts to beef up the rigidity of the frame. I printed the <a href="https://www.thingiverse.com/thing:921948">Z braces</a> in green ABS and black PLA.</p>
<h4 id="tools">Tools</h4>
<div>
<figure>
<img src="/assets/posts/2017-02-22-beagleprint-part-1/toolbox.JPG" alt="3d printing toolbox" width="100%" />
<figcaption>Cheap organizer from Harbor Freight with all my 3d printing tools and spare parts</figcaption>
</figure>
</div>
<p>You will want some extra tools too, namely a small keychain level to check all your guide rods and frame alignment (and the table the printer is on… my table is horribly off kilter and I had to compensate with some cedar shingle wedges on the two front feet), Xacto knife, CA glue, zip ties, acetone, rubbing alcohol, and certainly some metric hex drivers to supplement those included. I have found that the knockoff BuildTak build surface is sticky enough for both PLA and ABS as long as you wipe it with isopropyl first to remove any grease or residue.</p>
<h3 id="little-issues">Little Issues</h3>
<h4 id="stock-cooler">Stock Cooler</h4>
<p>The stock fan shroud does pretty well when you’re first getting started. I have since printed the <a href="http://www.thingiverse.com/thing:1025471">DiiiCooler</a> though, and holy mackerel has it helped with many issues. Initially I just removed the left screws from the stock fan shroud and cocked the fan to point more directly at the print head as suggested elsewhere. This helped and was better than the standard configuration, but a radial cooler like the Diii is better still. The only downside that I have found using the Diii cooler is that it occludes your view of the print head, which makes seeing issues from the webcam and correcting them ASAP a little more challenging, but those issues are fewer with the new cooler anyway.</p>
<h4 id="bed-surface">Bed Surface</h4>
<p>I later found out that prints can adhere <em>too</em> well to the bed, especially ABS at high temperatures, and they may or may not bond together completely. This will result in you having to tear off all that build surface and get the adhesive underneath off too… and this task, my friends, is a 100% guaranteed total pain in the ass or your money back. I would <em>suggest</em> that if this happens to you, or you have to replace the build surface for any reason, that after you get all of the old build surface and glue off of the aluminum bed, <strong>now is probably the best time to order yourself a borosilicate bed and silicone thermal pad, because you certainly do not want to struggle with that hell goop ever again.</strong></p>
<p>For anyone looking to remove that hellacious goop from their aluminum bed, I tried isopropyl, acetone, and Goo Gone with varying degrees of success. Goo Gone with the bed turned to ~68°C worked best. If you get the bed too hot the glue actually bonds better and becomes even more difficult to remove (I mean, it is designed to stay stuck while you are printing, so not too big of a surprise). Don’t use any metallic scrapers (like the one supplied with the kit that you already tried) because they will gouge the crap out of the aluminum bed (steel > aluminum, thanks <a href="https://en.wikipedia.org/wiki/Mohs_scale_of_mineral_hardness">Mohs</a>). I used an old gift card to scrape / roll / drag the glue off after letting the Goo Gone set for a few minutes while the bed heated. It took a while and was frustrating, but it does all eventually come off.</p>
<h4 id="y-axis">Y Axis</h4>
<p>One of the first “mods” I did was reinforcing the Y axis idler pulley. To accomplish this I just removed the screw holding the front pulley after detensioning the Y axis belt from underneath. Next you take off the nylock nut they use as a spacer. I put a washer on each side of the pulley when reinstalling the screw. You want to thread it in all the way to the frame but leave some space between the head of the screw and the washer, enough to fit a zip tie. Finally you put a zip tie around the screw head and the front of the frame with enough tension to keep the pulley from getting torqued and bent out of alignment. There are some nice designs you can print for this purpose and I probably will do that eventually, but for now this seems to be a good hack. The movement of the Y axis is by far the sloppiest on my printer; I am pretty sure at least one of the linear bearings was toast right out of the box, as it makes a rough noise, but it works OK. I have also taken to wiping the guide rods with 3-in-1 oil every once in a while to help remove dust and improve smoothness.</p>
<h4 id="spool-holder">Spool Holder</h4>
<p>Most of the pictures you see online have the spool holder mounted atop the X/Z gantry. This is not the place to put it. If you have any wobble in the frame it will only be accentuated by putting a kilo of plastic way up there and moving the center of mass so high. Mount the spool holder on the control box and use something to guide the filament to the extruder smoothly (I obviously used more zip ties). Also, print a new spool holder (<a href="http://www.thingiverse.com/thing:1889438">I used this one</a>) when you get a chance if the stock one does not fit well; a smooth rolling action makes the extruder fight less and everything run smoother.</p>
<div>
<figure>
<img src="/assets/posts/2017-02-22-beagleprint-part-1/printer-overview.JPG" alt="3d printer overview" width="100%" />
<figcaption>My current setup. You can see all the mods discussed above, as well as all the accessories.</figcaption>
</figure>
</div>
<h3 id="its-the-end">It’s The End</h3>
<p>I am fairly certain that was most of the hardware stuff I had to get out there. There will be some remarks on the placement and setup of the ARM server in the next post, but most of that depends on personal setup and goals. See y’all soon. Maybe?</p>
<h2 id="stay-tuned-for-part-2-octoprint-configuration">Stay Tuned for <a href="/2018/01/17/beagleprint-part-2/">Part 2: Octoprint Configuration</a></h2>
<p><em>these pages are (for now) in a constant state of adjustment while I get used to this platform, so if something seems different or rearranged, it probably is.</em></p>Tucker SylviaBecause hacking is fun, and I will look for any excuse to avoid my thesis during a snow storm