Thursday, November 16, 2017

Glenwood Dunes Trail System Hike

July 2018 Update: The quality of the video shot last November was so poor, I re-shot the hike on Memorial Day, 2018. The video below is from 2018.

Earlier this month, I filmed the Glenwood Dunes hike, a hike I first did last year. The video is finished, and now up on YouTube. Feel free to have a look:





This hike took place in the fall, so the colors were out and very bright. I used a different camera for this hike, and I wasn't very impressed with the quality. I'll likely go back to my other camera for the next in the series.

Tuesday, November 14, 2017

Web Scraping 2: Some Intermediate Functionality

IMPROVEMENTS AND CHANGING BUSINESS REQUIREMENTS


In my previous post on web scraping, I described a very basic script to cull data from the Internet. That was a pretty good first attempt, and the simplicity of the web application, as well as the business requirements, didn't require a great deal of coding. The follow-up, however, required some additional work, as the previous script would not handle many of the new issues that arose. In this script, we are working with data from the Apache County, Arizona Assessor's Office. While the web application is the same, there are some changes that require some additional work. Specifically, we address the following:

  • We need a better way to wait for elements to become available on the webpage. Adding time.sleep() statements works, but we can't always be sure the number of seconds we specify will be sufficient, and the more time we specify, the longer the script takes to run, since time.sleep() is an unconditional wait. We need something that waits for the page to load completely: if that happens in 0.75 seconds, processing should continue after 0.75 seconds, and if it takes too long, a timeout should interrupt the script.
  • We need to add some logic to identify records that we do not want. If a record is not of interest, we should break out of the processing for that record immediately and continue to the next. This reduces the file size but, more importantly, it reduces the run time of the script and shortens the code path (eliminating opportunities for errors to arise and cause an unexpected failure).
  • We need a better XPath tool. After a recent update, the XPath generator tool we used in the last article stopped working.
  • We need a way to handle the possibility of multiple items in a list that might be returned from a single search.
  • The data we need might be split among multiple screens. We should be able to move between screens to capture all of the data we want.
  • Some of the data we need may be in a frame. We need a way to be able to select the frame which contains the data we are looking for.
  • We need a way to access objects which may not be visible on the screen.
  • Sometimes we hit a parcel ID that is not (or is no longer) in the system. This produces an error that will stop the scraper and throw an exception. We need to gracefully handle 'parcel not found' errors.
  • Along with the previous item, it would be useful to log what takes place with each record from the original file. Adding a log file would allow us to record successes and failures, and the reason a given parcel could not be retrieved.

UPDATED CODE


The following code addresses each of the issues highlighted above. As with the previous post, we'll make notes in code, then explain in more detail below.

# 1. Imports
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

from selenium.common.exceptions import NoSuchElementException
import time
import re
import csv

# 2. Create a webdriver object.
driver = webdriver.Firefox()

# 3. Setup Input and Output files
# Input file format is: AcctNo,ParcelNo.
with open('allkeys.csv', 'rb') as f:
    reader = csv.reader(f)
    parcelList = list(reader)
totItems=len(parcelList) # get the count of total items for status.
# 4. Change the mode of the output file from write to append.
outfile = open('datafile.csv','a')
outfile.write('"Account No.","Parcel No.","Legal Class","Unit of Meas.","Parcel Sz.","Short Owner Name","Address 1","City","State","Zip Code"\n')

logfile = open('ApacheCoScraper.log','a')

# 5. A counter is used to tell us how far along we are. It's used below.
iCntr = 0

# 6. Read a row from the input file - this contains 2 fields now.
for row in parcelList:
    # 7. Load the County page and get past the splash page
    driver.get("http://www.co.apache.az.us/eagleassessor/")
    time.sleep(3)
    driver.switch_to.frame(driver.find_element_by_tag_name("iframe"))
    # 8. Scroll down so the submit button is visible.
    submitElement = WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.NAME, 'submit')))
    driver.execute_script("arguments[0].scrollIntoView(false);", submitElement)
    submitElement.click()
    # 9. Implicit wait: element lookups will poll for up to 15 seconds
    # (this applies to the remainder of the script)
    driver.implicitly_wait(15) # seconds


    # 10. Search for a parcel number, and bring up the Account Summary page
    accountNumber,parcelNumber = row
    parcelbox = WebDriverWait(driver, 20).until(EC.presence_of_element_located((By.NAME, 'ParcelNumberID')))
    parcelbox.send_keys(parcelNumber)
    parcelbox.submit()




    # 11. Try to find a warning message (indicates 'parcel not found').
    try:
        warningMsg = driver.find_element_by_class_name('warning')
    except NoSuchElementException:
        pass
    else:
        iCntr += 1
        print "%s (%d/%d) not found." % (parcelNumber, iCntr, totItems)
        logfile.write(parcelNumber + '(' + str(iCntr) + '/' + str(totItems) + ') not found.\n')
        continue


    # 12. Multiple entries may appear, select the one we need.
    try:
        driver.find_element_by_link_text(accountNumber).click()
    except NoSuchElementException:
        driver.find_element_by_class_name("clickable").click()
   
    # 13. We only want records with a legal class of "02.R" (land only); ignore anything else.
    raw_Legal_Class = driver.find_element_by_xpath("//*[@id='middle']/table/tbody/tr[2]/td[3]/table[3]/tbody/tr[2]/td[1]")
    legalClass = raw_Legal_Class.text

    if legalClass != '02.R':
        iCntr += 1
        print "%s (%d/%d) skipped." % (parcelNumber, iCntr, totItems)

        logfile.write(parcelNumber + '(' + str(iCntr) + '/' + str(totItems) + ') skipped.\n')
        continue

    # 14. Capture Account Summary page data (the first page of data)
    raw_Account_Number = driver.find_element_by_xpath("//*[@id='middle']/h1[1]")
    raw_Parcel_Number = driver.find_element_by_xpath("//*[@id='middle']/table/tbody/tr[2]/td[1]/table/tbody/tr[1]/td[1]")
    raw_Tax_Area = driver.find_element_by_xpath("//*[@id='middle']/table/tbody/tr[2]/td[1]/table/tbody/tr[2]/td[1]")
    # 15. Capture the data items as python variables before moving to the next page.
    accountNumber = re.sub('Account:', '', raw_Account_Number.text).strip()
    actualParcelNumber = re.sub('Parcel Number', '', raw_Parcel_Number.text).strip() # actualParcelNumber has '-' marks in it.
    taxArea = re.sub('Tax Area', '', raw_Tax_Area.text).strip()

    # 16. Jump to the Parcel Detail tab
    PDPageLink = driver.find_element_by_link_text('Parcel Detail')
    PDPageLink.click()
    # 17. Obtain page data as webdriver objects
    raw_Unit_of_Measure = driver.find_element_by_xpath("//*[@id='middle']/div/span[6]")
    raw_Parcel_Size = driver.find_element_by_xpath("//*[@id='middle']/div/span[8]")
    # 18. Extract data from the webdriver objects, and place in python variables.
    unitOfMeasure = raw_Unit_of_Measure.text.strip()
    parcelSize = raw_Parcel_Size.text.strip()

    # 19. Jump to Owner Information tab
    OIPageLink = driver.find_element_by_link_text('Owner Information')
    OIPageLink.click()
    # 20. Capture Owner Information page data as webdriver objects
    raw_Owner_Short_Name = driver.find_element_by_xpath("//*[@id='middle']/div/span[2]")
    raw_Address1 = driver.find_element_by_xpath("//*[@id='middle']/div/span[6]/table/tbody/tr[1]/td/span[2]")
    raw_City = driver.find_element_by_xpath("//*[@id='middle']/div/span[6]/table/tbody/tr[3]/td[1]/span[2]")
    raw_State = driver.find_element_by_xpath("//*[@id='middle']/div/span[6]/table/tbody/tr[3]/td[2]/span[2]")
    raw_Zip = driver.find_element_by_xpath("//*[@id='middle']/div/span[6]/table/tbody/tr[3]/td[3]/span[2]")
    # 21. Extract data from webdriver objects, and place into python variables.
    ownerShortName = raw_Owner_Short_Name.text.strip()
    address1 = raw_Address1.text.strip()
    city = raw_City.text.strip()
    state = raw_State.text.strip()
    zipCode = raw_Zip.text.strip()

    # 22. Print the data items to our output file.
    stringData = '"' + accountNumber + '","' + parcelNumber + '","' + actualParcelNumber + '","' + taxArea + '","' + legalClass + '","' + unitOfMeasure + '",' + parcelSize + ',"' + ownerShortName + '","' + address1 + '","' + city + '","' + state + '","' + zipCode + '"\n'
    # print stringData
    outfile.write(stringData)
    # 23. Print a status message to the user.
    iCntr += 1
    print "%s (%d/%d) captured." % (parcelNumber, iCntr, totItems)
    logfile.write(parcelNumber + '(' + str(iCntr) + '/' + str(totItems) + ') captured.\n')   
    # 24. Back to main search page
    driver.find_element_by_link_text('Account Search').click()

# 25. Cleanup
outfile.close()
logfile.close()

CODE WALK-THROUGH


1. Imports
These are the same as in the previous post, with the exception of importing the NoSuchElementException class. This class is leveraged down in step 11 to determine if our search for a parcel ID returned no results.

2. Create the webdriver object
Again, this is the same as the previous post. We interact with the webdriver object to do things, and capture data.

3. Setup Input and Output files
This is also the same as the previous post. We need to specify our input and output files (read parcel numbers from input, write web data to output).

4. Change the mode of the output file from write to append.
This is also the same, with one exception: instead of opening our file in "write" (w) mode, we open it in "append" (a) mode. This way, if we run into an error, we can restart the script and it will append to the existing data file ("write" mode truncates the file each time it is opened - not what we want).
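
As a quick illustration of the difference between the two modes (a minimal standalone sketch, not part of the scraper):

# 'w' truncates the file every time it is opened; 'a' keeps what is
# already there, so a restarted run picks up where the last one stopped.
outfile = open('demo.csv', 'w')
outfile.write('"header"\n')
outfile.close()

outfile = open('demo.csv', 'a')  # append: the header line survives
outfile.write('"row 1"\n')
outfile.close()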

5. We create a simple counter variable that increments with each record that we process.

6. As with the previous post, we loop through each row in the input file to do some set of tasks.

7. As before, we load the county page. We set a time.sleep() here to ensure the page loads, but this is the last time we will use a time.sleep().

8. Make an element visible
This was a new problem that popped up with the Apache County page. The content on the page pushed the submit button down below the bottom of the window, so it was not visible, and if an element is not visible, Selenium can't work with it. Imagine trying to click a button that is not visible on the page: you can't do it; your only option is to scroll down to the button, then click it. We do the same here. The 'false' parameter to the scrollIntoView() function tells Firefox to scroll only until the entire element is visible, then stop (as opposed to aligning the element with the top of the screen).
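
In isolation, the pattern looks like this (a minimal sketch, assuming the driver object from step 2 and the Selenium API used throughout this post):

# Find the element, scroll it into view, then click it.
submitElement = driver.find_element_by_name('submit')
# false: stop scrolling as soon as the element's bottom edge is visible;
# true would align the element with the top of the viewport instead.
driver.execute_script("arguments[0].scrollIntoView(false);", submitElement)
submitElement.click()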

9. Wait for elements to load
The driver.implicitly_wait() function solves a very big problem for us: how do we ensure we don't try to start reading data items before the page has rendered them, yet not wait indefinitely? With driver.implicitly_wait(x) in effect, every element lookup will poll the page for up to 'x' seconds; as soon as the element appears, the script continues to the next statement. If 'x' seconds pass and the element still has not appeared, the lookup throws an exception. This setting applies to the remainder of the script (every element on every new page has 'x' seconds to appear or risk a timeout), so we no longer require time.sleep() function calls.
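
Here are the two wait styles side by side (a minimal sketch, assuming the driver object from step 2):

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# Implicit: every element lookup from here on polls for up to 15 seconds
# before raising NoSuchElementException.
driver.implicitly_wait(15)

# Explicit: wait up to 20 seconds for one specific condition, raising
# TimeoutException if it never becomes true.
parcelbox = WebDriverWait(driver, 20).until(
    EC.presence_of_element_located((By.NAME, 'ParcelNumberID')))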

10. Search for a parcel number, and bring up the Account Summary page
Something here has changed since the previous blog post: our input file no longer contains *only* parcel IDs. It now contains account numbers and parcel numbers. We use Python's sequence unpacking to grab both numbers from the current record; we'll see how the two are used further down. We then tell Selenium to wait for the parcel box to become available on the page, fill it with the parcelNumber we obtained from the input file, and submit the query to the web server.
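
The 'python magic' is plain sequence unpacking: a two-field row from the csv reader splits directly into two variables. A minimal sketch with a made-up record:

# A hypothetical two-field record, as the csv module would return it.
row = ['R0012345', '101-23-456']     # [account number, parcel number]
accountNumber, parcelNumber = row    # unpack: one variable per field
print accountNumber                  # R0012345
print parcelNumber                   # 101-23-456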

11. Check for 'parcel not found' error
In testing, a string of text with class 'warning' is displayed in the search results area if the parcel ID being sought was not found. By searching for that element, we can handle the failure by writing a message to the screen and the new log file, then continuing on to the next line in the input data. If the exception is raised (meaning the warning was *not* found), we simply pass out of the try clause and continue with capturing the data.
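
The inverted logic of the try/except/else idiom trips people up, so here it is stripped down (assuming the driver object from step 2):

from selenium.common.exceptions import NoSuchElementException

try:
    driver.find_element_by_class_name('warning')
except NoSuchElementException:
    pass    # no warning element: the parcel WAS found, keep going
else:
    print "parcel not found, skipping to the next record"
    # in the real loop, a `continue` here jumps to the next input row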

12. Handle multiple results
For this dataset, there may be multiple rows with the same parcel number, but each will have a different account number. This is where we leverage the account number (the first field) from the input file: in the results, we search for a link containing the account number we were provided. If we get a hit, we click that row. If that fails (NoSuchElementException is thrown), we just grab the first row of class 'clickable'.

13. Filter certain records
At this point, we have enough information visible on the screen to determine whether or not we want this record. The business rule I was given states: 'capture properties that consist of only land; these have a legal class of 02.R'. So we can drop in a simple if statement to check the value. If it's not 02.R, we print a message to the user (and log the same to the logfile), skip all remaining instructions, and continue with the next row in the input file. Otherwise, we continue with the script.

14./17./20. Capture data as webdriver objects
In these three steps, we scrape the page looking for data, and grab each item as a webdriver object. The caveat is that once we move on to the next page, the webdriver objects we just captured go stale and can no longer be read.

15./18./21. Capture the data items as python variables
In these steps, we do some simple processing on the data (trimming excess whitespace, stripping labels such as 'Account:' from the captured text, etc.), then store the result in a python variable. That way, once we move to the next screen and the webdriver objects go stale (and they will), we still have the data we need captured in Python for writing to the data file.
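
A minimal sketch of the failure we're avoiding (assuming the driver object from step 2; Selenium raises StaleElementReferenceException when a handle outlives its page):

from selenium.common.exceptions import StaleElementReferenceException

element = driver.find_element_by_xpath("//*[@id='middle']/h1[1]")
savedText = element.text    # a plain Python string: safe to keep
driver.find_element_by_link_text('Parcel Detail').click()    # new page
try:
    print element.text      # the old handle is now stale
except StaleElementReferenceException:
    print savedText         # the copied string still works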

16./19. Move to the next page
In these steps, we move between pages of data. Since the links are simple textual links, we locate them by the text shown on the page, then click to jump to that page. Note that the wait we set up in step 9 applies to these page jumps as well: processing will not continue until the elements we ask for appear, and if the 'x' seconds specified in step 9 pass first, the script will throw an exception.

22. Output the data
Here, we create a single string (by hand) of each data item we captured from the various pages. Then, we send that string to the output file.

23. We now increment the counter variable, and print a message to the user that the data for the specific parcel id has been captured. We also write a line of the same to the log file.

24. By clicking the "Account Search" link, we go back to the initial search page, thus setting us up for the next data item in the input file.

25. As with the code in the previous blog post, we clean up our data files prior to exit.

UPDATED XPATH IDENTIFICATION


The previous post leveraged a tool that stopped working after a Firefox update. I went out and located another tool, FirePath. One advantage of FirePath is that it integrates directly into FireBug (which, if you followed the previous post, you would already be using); FirePath is simply another tab in FireBug. When inspecting elements in a web page, simply highlight the element you are interested in, then click the FirePath tab to get the XPath reference.

WRAP-UP

And that's it. We now have a much more robust and flexible script for scraping the web. Of course, the script is not finished; there is still a wide array of errors that could occur and would require an exception handler. For example, I ran across roughly one instance in 4,000 where the script broke, presumably due to a page load timeout (re-running the script starting with the parcel ID that previously caused the error succeeded).

REFERENCES


Selenium Python API Guide

Sunday, October 1, 2017

Web Scraping

Update: This post describes very basic web scraping. An updated post, Web Scraping 2, describes some intermediate functionality.

Something I've wanted to learn for some time is how to apply automation to websites to scrape data. By way of a use case, consider a website that has some collection of data you want, but you have to perform a couple of mouse clicks and wait for page loads to get at each item. Then you have to copy the data you want and paste it into some other medium that holds the entire set (a text file or spreadsheet, perhaps). Doing this by hand is very tedious, and horribly inefficient. If, however, we can apply automation to a web browser, we can have the browser do the work for us, and come back when it's all done.

As an example I used the Costilla County, CO Assessor's website. On this site, with a set of parcel numbers, you can obtain a good deal of information on a piece of property. First, I'll describe the environment, then we'll look at the code.

THE ENVIRONMENT


I used a Linux PC for this exercise, and Mozilla Firefox (v. 55.0.2). In addition, I installed the following packages:
  • pip (python installer - required to install Selenium)
  • Selenium - the library that allows python to interact with the browser.
  • geckodriver - the interface between Mozilla Firefox and Selenium.
  • Chromium - used for locating complex data items.

Pip is installed with yum/rpm or apt, depending upon your distribution. Selenium is installed with pip. geckodriver is installed by simply unpacking the geckodriver compressed file and copying it to a location in your environment's path (I just dropped it in /usr/local/bin). Obviously, you'll need Python; version 2.7 came pre-installed with my distribution, so I just used that. Finally, I created a directory to keep all my files together: my script (scrape.py), my source data (a list of parcel numbers), and geckodriver's log (created automatically by geckodriver in the directory where the script is executed). We execute the script as you might expect:
$ python scrape.py

THE SCRIPT


# 1. Imports
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import time
import re

# 2. Do some prep work to set up the environment
driver = webdriver.Firefox()

# 3. Setup Input and Output files
with open('numbers.list') as infile:
    parcellist = infile.read().splitlines()
outfile = open('datafile.csv','w')
outfile.write('"Parcel No.","Size","Unit","Assessed","Actual","Legal Summary"\n')

# 4. Load the County page and get past the guest login
driver.get("http://69.160.37.111/assessor/taxweb/search.jsp")
driver.find_element_by_name('submit').click()

# 5. process each file in the list in a loop
for parcelNumber in parcellist:

    # 6. drop a short delay before checking for the parcel #
    time.sleep(3) 

    # 7. First action, find the field named ParcelNumberID
    parcelbox = driver.find_element_by_name('ParcelNumberID')

    # 8. paste the current number from our list
    parcelbox.send_keys(parcelNumber)

    # 9. Bring up the data page
    parcelbox.submit()
    time.sleep(1)
    driver.find_element_by_class_name("clickable").click()
   
    # 10. Capture data items
    actual=driver.find_element_by_xpath('//*[@id="middle"]/table/tbody/tr[2]/td[3]/table[2]/tbody/tr[2]/td[2]')
    newactual = re.sub('[$,]', '', actual.text)
    legal=driver.find_element_by_xpath('//*[@class="accountSummary"]/tbody/tr[2]/td/table/tbody/tr[4]/td')
    legalSummary = re.sub('Legal Summary ', '', legal.text)
    units=driver.find_element_by_xpath('//*[@class="accountSummary"]/tbody/tr[2]/td[3]/table[2]/tbody/tr[2]/td[4]')
    propertyType=driver.find_element_by_xpath('//*[@class="accountSummary"]/tbody/tr[2]/td[3]/table[2]/tbody/tr[2]/td[1]')
    assessed=driver.find_element_by_xpath('//*[@class="accountSummary"]/tbody/tr[2]/td[3]/table[2]/tbody/tr[2]/td[3]')
    assessedValue = re.sub('[$,]', '', assessed.text)
    unitOfMeasure=driver.find_element_by_xpath('//*[@id="middle"]/table/tbody/tr[2]/td[3]/table[2]/tbody/tr/th[4]')
   
    # 11. Output data
    stringData = '"' + parcelNumber + '",' + units.text + ',"' + unitOfMeasure.text + '",' + assessedValue + ',' + newactual + ',"' + propertyType.text + '","' + legalSummary + '"\n'
    outfile.write(stringData)

    # 12. Back to main search page
    driver.find_element_by_link_text('Account Search').click()

# 13. Cleanup
outfile.close()


CODE WALK-THROUGH

1. We need to import modules for all of the work we're doing.

2. We create a WebDriver object with which we interact. This is the object we use to send our commands to Selenium code, and get information back from the browser.

3. This section is reading the set of parcel numbers in our file, and creating a list object. We'll iterate through that object further down to get data. We also create a file into which we'll write the data, and start it off with a header for the data items we wish to capture.

4. Now we start the real work. This section tells the browser to navigate to the starting webpage. On that page, you have the option to login as a guest, or as a user with an account (presumably, fee-based). We're going to login as a guest, so we locate the name of the login submit button, and tell the browser to click it.

Locating HTML Objects

This is probably a good time to talk about how we tell the browser to do something with one item on the page versus another. I used Mozilla's Inspector tool (accessible from the menu). This tool allows you to view the page source, and it highlights the item on the page corresponding to the line you have highlighted in the source. By looking at the HTML tags around the highlighted line at the bottom of the screen, you may be able to pick out a 'name' or 'id' attribute.

Using Inspector tool to locate the tags describing an item on the page.


If so, that makes it easy to pass into Selenium. In our test, the Login button has the name "submit", so that's what we used to issue a click() action against.
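
With a name or id in hand, the locator call is one line (a minimal sketch, assuming the driver object created in step 2):

# The login button's name attribute is 'submit', so locate it by name.
loginButton = driver.find_element_by_name('submit')
# An id attribute would work the same way via find_element_by_id.
loginButton.click()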

5. Now we iterate through the list, copying each parcel number from our input file into the variable parcelNumber.

6. We drop in a sleep command to pause the script for just a moment. Without this, we get an error that the item we're looking for doesn't exist on the page. Essentially, we're checking for the parcel number box before the page loads fully, so we slow things down just a bit.

7. We create an object (parcelbox) that represents the ParcelNumberID field in the webpage. Again, we used the Inspector tool to find the name. We could just as easily have used the driver.find_element_by_id function here, as the page designer set both the field's name property and id property to 'ParcelNumberID'.

8. In this step, we insert the current parcel number (parcelNumber variable) into the text field.

9. Now we submit the search. That brings up a page listing all of the properties matching the search criteria (searching on things other than the parcel number key could return multiple items). Each item in this list is tagged as being in the 'clickable' class. Since we're using the parcel number to search, we only ever get one response back, so we don't need to worry about multiple 'clickable' class objects; we just click the only 'clickable' class object on the page, which is a link to the parcel's individual page.

10. Now we have a page full of data items, some of which we want to capture. Looking at the Inspector tool, these items are deeply embedded in multiple levels of tags on the page, and their name and id tags are not set. This presents a problem. We're going to leverage another tool exposed by Selenium: Xpath.

Obtaining Xpath References

Xpath is a standard for referencing individual items in a web page. It is the perfect tool for locating a specific cell of data in a large page with no uniquely identifying tags. Unfortunately, the syntax of Xpath is very complicated, so we use a Chrome extension, Xpath Generator 3.0.0, to locate the Xpath reference for us. Once it's installed, navigate to the county assessor page, search for a property, and bring up its data page. Next, click the Xpath icon in the browser, then click on the item you want to locate; Xpath Generator will give you one or more Xpath references to that item. Click each reference to find the one that best locates the data you are looking for: some will highlight just the data, others will highlight the table or row it sits in. There may also be multiple Xpaths to the specific data item; choose the one that seems best. By 'best' I mean one that can be used over and over in our script. If the Xpath reference matches on the text currently in that field, that is a bad choice, as the text will likely change on the next iteration, and no hit would be found. On the other hand, a reference that uses row, column, or cell numbers is a better choice, as the data will likely always be presented in the same place on the page for each property we search for.
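
To make the 'best' advice concrete, here is a sketch contrasting a brittle reference with a robust one (the text-matching Xpath is a hypothetical illustration, and a driver object is assumed):

# Brittle: anchored to the value currently displayed in the cell; it
# breaks as soon as the next parcel shows a different amount.
# actual = driver.find_element_by_xpath('//td[text()="$1,234"]')

# Robust: anchored to the cell's position in the table, which stays the
# same for every parcel we look up.
actual = driver.find_element_by_xpath(
    '//*[@id="middle"]/table/tbody/tr[2]/td[3]/table[2]/tbody/tr[2]/td[2]')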

Back to step 10. We create variables and assign them the contents of each data item on the page that we are interested in. For some data items, we also do a bit of cleanup. For example, currency amounts are captured, then we remove any dollar signs or commas. That makes the currency amount much easier to work with.
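
The cleanup is a one-line regular expression: strip every '$' and ',' character from the captured text. For example:

import re

print re.sub('[$,]', '', '$1,234,567')   # prints: 1234567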

11. Once we have all of our data items, we put them into a string (in this case, I'm also inserting commas and quotes so the file will end up being a .csv file), and output the string into our output file under the header row.

12. This is the last step in the loop. We click the Account Search link to go back to the page we were on when the for loop started, and the entire process repeats until there are no more parcel numbers to search for.

13. Last step: always perform cleanup by closing out files. Yes, the Python interpreter will close any files left open at exit, but it's a good habit to close any file you open. (We did not close the input file object, because we opened it with a with statement, which closes the file automatically when the block ends.)
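
A minimal sketch of the difference:

# Opened with a plain open(): we are responsible for closing it.
outfile = open('datafile.csv', 'w')
outfile.write('"header"\n')
outfile.close()

# Opened in a with block: the file is closed automatically when the
# block ends, even if an exception is raised inside it.
with open('numbers.list') as infile:
    parcellist = infile.read().splitlines()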

So that's it. We now have a csv file full of data items that we can do something else with.

References

Xpath Tutorial
Page on how to locate data items with Selenium

Wednesday, August 23, 2017

Solar Eclipse Expedition

Heading Down for the Event


The day finally arrived, and it was time to head down south for the 2017 solar eclipse. Ed, Sydney and I all headed out around 8:30 a.m. on Sunday, August 20. The trip down was uneventful. We took state highways the entire way, in order to avoid any traffic buildup. We hit the Ohio River at Cave-in-Rock, IL around 4:30p.m. The trip was very boring (notwithstanding a little relief in the terrain down in Shawnee National Forest), so we took a quick side trip to Cave-in-Rock State park to see the cave. Back to it, and we touched down at my aunt's house around 6:30 p.m. We ended up going out to grab BBQ at Knoth's by the Kentucky Dam. It really is a crime to visit western Kentucky and not get BBQ.

The next morning, we got up, had breakfast, and headed over to another aunt's house for the event.

Setup


Setup took a good hour and a half. Using the coordinates obtained from Google Earth for the location, I was able to look up the position of the Sun during the event from where I would be positioned using NOAA's Solar Position Calculator (azimuth = 161 deg., elevation = 63.6 deg.). With this knowledge, I could line up the tripod to ensure the sun shield was correctly oriented. (This is visible in the image of the equipment taken very near totality: the shadows of the tripod legs show that it is lined up very well with the direction of the sun.)

My primary photography setup. Note the shorter rear leg on the tripod. I needed to tilt the entire apparatus back to ensure I could get the angle to the sun, and not drive the camera into the telescope's base.


I tried using cell phone cameras for the other types of photography, but the time-lapse on one stopped after a few minutes (the phone overheated), and the other phone, using the 12x zoom, just didn't do a good job of capturing a clean image. So most of the effort went into the Sony/Meade telescope setup.

The Equipment, and Preparation


This consisted of a Sony a3000 20 MP camera with a T2 adapter connecting into the Meade telescope adapter. I used the shorter ring alone to try to reduce the size of the image on the sensor (hoping to get more of the corona). This attached to the back of the telescope. While sighting in, I attempted to start working on the focus with the electric focuser (the original from Meade, meant for this scope). For some reason, I could hear the motor spinning, but the focus in the scope just would not change. I opted to remove the electric focuser and revert to the manual method (ultimately, the better decision). On the tripod, I mounted my home-made sun shield, then draped the telescope in a white t-shirt. I added a black t-shirt around the opening in the shield to minimize the amount of ambient light around the camera's LCD. Finally, I added the home-made solar filter, and it was time to start focusing.

When I tried this in July, it was very difficult; partly due to the heat, but mostly because it's just a difficult process. So first, a bit of background. The Sony camera is fully automatic (although it does support manual modes). Without a lens, however, the camera's software has no way to determine if the image is in focus. The telescope is where the focusing takes place, and of course, it has no integration with the camera. So the process to get the focus correct consisted of the following:

  1. Attempt to focus the image using the telecope's focusing knob while looking at the LCD.
  2. When I think it's right, capture an image.
  3. Turn off the camera, and remove the SD card.
  4. Insert the SD card into my laptop (sitting about 50' away in the shade).
  5. Open the image I just captured and zoom into something with fine detail (sun spots worked very well for this).
  6. Determine if the image is in the best possible focus.
  7. Eject the SD card (waiting for the OS to fully unmount it).
  8. Insert the SD card back into the camera, turn the camera on.
  9. If the image was in the best possible focus, stop, and don't touch the focus knob.
  10. If the image was not in the best possible focus, return to step 1.

Back in July, I went through this process 6 or 7 times, and nothing came out really well. This time, I had it on the 4th attempt. Had I continued to use the electric focuser (if it were working), I would likely have kept adjusting the focus in small amounts throughout the event, ending up with a sizable number of images that were out of focus. Manually focusing forced the discipline not to play with the knob during the event, as it is very difficult to reach under the shield.

The Event


The rest of the event was spent shooting images, and watching in awe. When the sun was about halfway covered, you could feel that the sun was not as hot; there was no discernible difference in temperature between the shade and the open sunlight. It was also visibly darker. The darkness progressed very slowly at first. Within the last 10 - 15 seconds before totality, however, you could see the light diminish, as if someone were turning down a dimmer switch on a light. The sky turned a beautiful shade of blue, somewhat similar to twilight. As reported, day creatures became silent as the darkness progressed, and night insects became much more vocal. When totality hit, I tried to get a picture with my cell phone camera, but the auto exposure adjustment overexposed the image, making the sun look full. The stars and planets became visible, as did the corona, which was magnificent. It also looked like sunset on the horizon in all directions, which was really interesting. I had also forgotten one piece of my tripod, which prevented its use with my cell phone camera. I did get a wonderful surprise as I snapped the images of totality, however: with the brightness of the sun blocked out, you could see solar flares shooting off the surface of the sun. I was able to capture them, and it ended up being a real treat.

By this time, the heat in the air felt the same no matter whether you were in the shade, or in the sunlight. You can see we got the bonus of some sunspots.


As we got close to the sun coming back out, I hopped back on the camera to try to grab the 'diamond ring' image. Unfortunately, the sun had shifted in the frame (as it had been doing, expectedly, all day), and when I tried to re-orient the telescope, it paused, and I held the button to move left a little too long. I then moved it back to the right and down a bit, then snapped the image of the diamond ring. Alas, it was too late, and the flare was much larger than I intended. At this point, I continued to shoot the progression from totality as I had when the event started. This gave me the ability to create the progression image below.

Notice the solar flares on the right, and lower right. An awesome bonus for the event. This image shows that there wasn't much to see of the corona; I could have increased the ISO and/or shutter speed and captured more. That said, with this setup the sun is so large in the frame, I would not have been able to see much anyway.

My 'diamond ring' image. You might say this has a really big rock.



As much fun as this was, photographing the entire event over a span of a few hours was very hot work. The temperature was in the 90s, and when my aunt checked the weather, the heat index was 105 degrees. The period of totality went by very fast, however; as I think back, it felt like it lasted 20 seconds.

Progression through the event.


Lessons Learned


What Went Right
  • Take more gear than you think you'll need.
  • Make a checklist of everything you want to take.
  • Practice, practice, practice. Most of the problems I had were averted because I ran into them over the summer leading up to the event.
  • I ended up using a fisheye lens on the time-lapse device. This allowed capturing the tree line and the arc of the sun (even the wide-angle was not sufficient to get both).
  • Share the event with friends and loved ones. This is one of those events that are so much more fun when shared, and I got to see it with my daughter which really made it much more special.


Opportunities for Improvement
  • I had problems with the electric focuser during practice, and again during the event. I will definitely not trust that piece of equipment again.
  • Snap more images. I used only about 1 GB (of a 16 GB card). There was no reason I could not continue snapping away during totality.
  • Try to get the diamond ring before and after totality - two chances to get the image, rather than one.
  • Instead of having a solar filter that fits over the end of the telescope tube, I think a small apparatus that allows you to flip the screen in/out of place makes more sense. I dropped the filter when I removed it, and could have risked bad dust spots when I put it back in place.
  • Start preparing earlier. I missed grabbing the full sun prior to the start of the event because I was still working on the focusing exercise.
  • When trying to shoot a time lapse, make sure the device stays cool. A shield of some sort may have prevented the phone from shutting down early.
  • A better camera would be a good bet. The a3000 isn't bad, but Nikons and Canons are better suited to astrophotography. Having something that is viewable from the computer screen would be really helpful, as I would not have to run back and forth during the focusing exercise.
  • Locating on high ground would allow a better view of the horizon.
  • It might be worth getting the tracking software working. Much of the time behind the camera was spent adjusting azimuth and declination to keep the camera in frame.

Other Interesting Notes
  • This was viewed from Princeton, KY (37° 05' 28.6" N by 88° 01' 17.95" W).
  • Telescope properties: f-stop: 13.5, focal length: 1350mm.
  • Temperature was in the low - mid 90s.

Parting Thoughts


I spent the day of the event with family, and that was as important as the event itself. Aunt Jean provided accommodations while we were down there, and Aunt Barbara and Ron hosted the event (and threw together an amazing dinner of pot roast, potatoes & carrots, and cole slaw). With my dad and daughter participating also, it made for a really neat family event. To further illustrate the hospitality that you get down there, I give you the following. As the eclipse was proceeding, a man and his wife from Texas were driving slowly through the area. They had been locating the center line of totality with an app on their phone, and ended up on the street in front of the house; it seems they had been chased away by another neighbor. We waved them over, offered them lunch, and had a nice visit with people we had never met. When you can share events like this with others, it makes for a sense of community, and a much better story.

Wednesday, August 16, 2017

Indiana Dunes National Lakeshore Trail #10

I finally got back out to the Dunes to try Trail 10 again this week. I did a few things differently, which made the hike much easier. First, I didn't start out with the 3-Dune Challenge. That stole a good deal of energy I needed for the long runs. Second, I stuck to the beach, rather than hiking the constant up and downs in the dunes. Finally, I picked a day that was only 80 degrees.

The Nature Center, where most trailheads are located.


The hike started at the Nature Center. I took trail 7 through the wooded area to the lake front. From there, I traveled on the beach close to the water. Just a little tip: If you walk within a few feet of the water's edge, the sand retains a small amount of water that improves its rigidity, making for a much easier walk than walking in soft dry sand. The route along the beach offers a level hike with the waves ever present in the background. The views of the dunes are very picturesque, as well.

A view of the dunes from the beach


After a couple miles, the trail turned back into the woodland, and it was really nice. One downside: the mosquitoes were really bad. I sprayed repellent as soon as I entered the woods, which only kept them from landing and biting; they continued to swarm around my head for the bulk of the walk. The bugs notwithstanding, the trail is well maintained, winding through very quiet woods.

A view of the wetlands south of Trail 10. Trail #2 heading in and around the wetlands was closed due to high water.


I did get to see a lizard, and a baby garter snake, though both were off into the brush before I could get my camera out.

One thing I did that was quite different this time was to shoot a video of the hike. I'll be posting that soon on YouTube and will link to it here when it's ready.


The profile is flat on the beach and in the woodlands, with a few small inclines entering each.


The 10 Km hike covers beach sands, and woodlands, and is the longest in the Dunes parks.

And now, my first hiking video. The audio in the beginning is a little low, as I underestimated the loss of volume due to the distance and the sound of cicadas in the background. The camera is also a bit shaky, so next time I'll play with the stabilization features.


 

References


Indiana Dunes National Lakeshore Trail map
 

Details

Min. Altitude: 167m
Max Altitude: 243m
Cumulative ascension: 180m
Distance: 10.1km (6.2 mi)
Duration: 2hr 51min
Temp: 80 deg F


Sunday, July 2, 2017

Indiana Dunes 3-Dune Challenge Hike


I woke up today with anticipation of a hike I have been contemplating all summer. The 3-Dune Challenge (created by the Indiana Dunes National Lakeshore as both outreach and promotion of a healthier lifestyle) is a short trail traversing three large dunes, each rising over 150 feet above the level of Lake Michigan. Normally, one would travel by car (or by bike, if living nearby) into the Park; when I approached, the line of cars to enter extended nearly a mile. Near the end of the line was Dune Park train station, a stop for commuters working in Chicago. Parking there requires another 2¼-mile walk (1 mile to the Park entrance, and another 1¼ miles to the trailhead), but your arrival at the park gate will be well ahead of those you left in line.

A quick inquiry with one of the staff on the location was met first with a smile (in spite of the heat), and then with enthusiastic instructions on where to find the Nature Center, where most trails begin. At the Nature Center, one can find a small set of supplies, facilities, and a very helpful staff. Clearly, these rangers and staff members enjoy their work. Today's hike would include not only the 3-Dune Challenge, but also picking up Trail #10 for a longer, more gradual extension to the hike. A short walk into the woods reveals the start of the 3-Dune Challenge.


Part 1: Three Dunes

Start of the 3-Dune Challenge
The first dune is Mt. Jackson, at 176 feet above the lake level. It starts with a sudden, steep incline, then levels off for a while. Another steep incline leaves one at the top of Mt. Jackson with shoes full of sand. This second incline is over pure beach sand: soft, hot, inclined directly toward the sun, and without shade. The top offers a nice view of the lake, framed by trees. You have the luxury of sitting on a bench, if you want the sun beating down on you; opting for a small clear spot under the trees offers shade.


Leaving the first peak is a quick jaunt down sandy trails; it's hard not to end up running with the steepness of the trail. The decline is quickly replaced with another incline. Heading up Mt. Holden, 184 ft. above lake level, leaves little time for rest, as the hiker is met with another difficult incline to the top. Here is a nice view of the lake with Chicago in the background. There is a particular smell to the sand dune: the humus left by the undergrowth and the mild smell rising up from the sun-heated sand mix to provide a very characteristic aroma. One trail leads down to the lake, the other on to the last of the dunes. Exiting Mt. Holden means another steep decline in thick sand.


There is no time at the bottom of the trail to take in a stroll. As soon as the decline finishes, it turns into another very steep incline. This one was tough. After the first incline, perhaps 60 feet, there is a leveling of the trail through the woods to a staircase finishing the climb to the third dune. Mt. Tom is 192 feet above the level of the lake, and very steep. A view to the west reveals the lake, Chicago, and the many steel mills along the southern edge of Lake Michigan. Again, there are trails leading down to the beach, as well as a staircase heading down the west side of the dune.

From left to right, views from Mt. Jackson, Mt. Holden, and Mt. Tom.

Part 2: The Lakefront


At the bottom, it became evident that one of the trails leading to the beach was the better option, as going too far puts one on the road leading to the beach parking lot, and the throngs of beachgoers. The landscape is lake to the north, followed by relatively flat sandy beach, which ends where the dunes begin. The dunes are marked by a sharp incline above the beach, with a mixture of trees, sand, and saw grass. As a kid growing up in the area, I remember saw grass vividly. At 1½ to 2 feet tall, it's your friend in hide and seek. In the later summer months, however, as the grass begins to dry out from the heat, the sharp tips become needles that easily pierce the skin. After winding around the back of the beach, the trail map implies one is on Trail 10; there are no signs, however. Leaving the beach for the dunes reveals innumerable trails, also unmarked. Following them leads across many uphill and downhill paths, all consisting of very soft beach sand. Although the trails are a few hundred feet from the water's edge, the sound of the waves is ever present, broken only by the power boats.

Looking out over the dunes at the lakefront. Trails wind all throughout the dunes above the beach.

Each step in this sand is like three over firm ground. The heat bears down from the sun, and back up from the sand. Every incline is followed by a rest in the nearest shady spot. Occasionally a rustle in the leaves is followed by the dashing of a lizard. With a healthy fear of humans, and in this heat, they move so fast that it's difficult to get a look at one. The largest are around 6 inches. When they do stop, their greenish-gray coloring becomes apparent, as do the black and white stripes running down their backs. Eventually a sign appears, marking Trail #9. A quick check of supplies reveals plenty of snacks, but water is in short supply: over half is gone. At this point, only the first third of the north side of the loop is complete; Trail #10 will have to wait for another day. Following the marker takes the hiker south, into the wooded area of the Park.

Part 3: Woodland


Here, the ground becomes much firmer, which makes hiking much easier. The firm and relatively level trail allows for a steady pace, in spite of tired muscles and sore joints. The shade is also a very welcome relief, cutting down on the need for water just a bit. Be prepared for mosquitoes, however: the humus retains moisture, and combined with the shade, makes a wonderful breeding ground.

The shade and firm ground made the last part of the trail much easier.

The wooded area is much quieter than the more popular 3-Dune area, or the beach. Even on the holiday weekend, there were only three encounters with other hikers. The remoteness also makes the back trails very quiet. The only sounds are the wind in the trees, the rustling squirrels, and the birds.

An elevation colored path.
Elevation profile of the path above.

Details

Min. Altitude: 156m
Max Altitude: 225m
Cumulative ascension: 325m
Distance: 9.8km (6.1 mi)
Duration: 3hr 24min
Temp: 85 deg F

Monday, May 29, 2017

More Astrophotography

We've had some pretty clear skies over the Memorial Day weekend, so I thought I'd go out and do a little imaging.

Around dusk, the crescent moon was stunning. I'm pleased that the blue came out in the image, and the burned in corners from the rim of the telescope give an unexpected depth to the image.


Early the next morning (around 2:00 a.m.), I went out to see if I could capture Saturn. This was much more difficult than imaging Jupiter, as Saturn is not only smaller, but roughly twice as far from Earth. We could use a little more light and magnification for this one.


Thursday, May 11, 2017

Brown County State Park: Taylor Ridge Trail (Trail #9)

Taylor Ridge Trail is the last of four hikes I made in a weekend while visiting family down south.

The official trail-head is in the Taylor Ridge Campground, and there is no parking other than for campers, so I picked up the trail at the end of the extension, which is what brought me to Ogle Lake.

Ogle lake is where the extension begins.

When I entered the gate to the park, I ran into construction; they were having the roads repaired. After winding through the road patching, I got to the Ogle Lake parking lot, which was being entirely resurfaced. As a result, the first 1/4 mile of the trail had the faint smell of hot asphalt, and the constant beeping of construction equipment backing up. As soon as I topped the first ridge, however, that was gone.

As you approach the trail, there is a small opening in the trees. It gives the impression of a gateway into the trail.

The start of the Taylor Ridge Trail 9 Extension

The trail starts with a steep 8-foot incline that turns left and rises steadily up the side of the ridge. There are few switchbacks for most of the trail; most uphills incline in one direction up to the top of the ridge. The temperature was around 62 - 64 deg. F. I started with a flannel shirt, and by the time I finished the first 1/4 mile up the ridge, a t-shirt was plenty.

The wildlife tended to be small: squirrels, striped ground squirrels, and songbirds.

One of the locals checking out the intruder.


On the plus side, despite the recent rain, there were no mosquitoes. No bugs at all, really, except for a deer fly that wouldn't leave me alone for 100 feet or so. Vegetation consisted of a beautiful lush green forest, sparse enough that you could see a few hundred feet down the sides of the hills. I counted what I think were 3 different varieties of ferns.

Some of the different types of ferns that thrive along the trail

The streams were really cool. The stream beds were all sedimentary rock. In most places, crushed pieces gathered to form the streambed, but every now and then you run across a portion of the stream that flows right over a flat layer of sedimentary rock exposed by erosion.

A section of stream that has eroded all the way down to the rock.


The trail quality was excellent: the trail was well worn, and signs and posts were well placed to keep you on-trail. (There is no blazing on the trees.)

Where logs had fallen across the trail, they had mostly been cut at each edge of the trail; there were 2 or 3 left from the recent storms that had not yet been cut. The $7 I paid for entry was well worth it, if only for the maintenance and care that went into keeping the trail up.

3 miles in, after crossing over two ridges, you hit the official trail 9 loop. I took it to the campground, then back to the loop to finish the trail. Total length was 8.01 miles.

Taylor Ridge Trail is Brown County State Park's most difficult trail. The Property Map (a PDF on the State Park's website) has it listed as "rugged"; the loop (the trail without the extension) was rated "moderate" on the trail-head sign in the campground. While it was a workout, it wasn't too difficult, and I'd have a hard time thinking of it as rugged. Stream crossings tended to be simple, as the streams were at most 3 or 4 inches deep and had rocks to help keep your feet dry. In one place, where runoff had eroded down about 5 feet, there was a bridge to help you across.

My overall impression, having now hiked multiple locations in both Indiana and Illinois, is that Indiana seems to do a better job maintaining trails, as well as publishing trail maps and information than does Illinois.




Details:
Min. Altitude: 162m
Max Altitude: 318m
Cumulative ascension: 517m
Distance: 13km (8.01mi)
Duration: 3hr 1min
Temp: 65 deg F

Shawnee National Forest: Indian Point Trail

Indian Point Trail is the third of four trails I hiked over a weekend while visiting family down south.

I had no intention of hiking Indian Point; I didn't even know it was there until I drove past it on the way to Observation Trail. I had some extra time to kill, however, so I thought I'd give it a go.

Trail head

The trail started out a little muddy; it was no doubt hit by the same storms that impacted the Panther's Den trails. It dried quickly as I gained elevation. The trail starts with a subtle uphill climb past a small pond that we'll come back to later. Next, there's a moderately steep uphill climb. The trail levels out as it winds around the south edge of the mountaintop, with a few places to get really nice views. Unfortunately, the well-worn left turn cuts out a good portion of the actual trail, which continues south for a while before returning to the mountaintop. You can see in the map below the actual trail (a light-gray line) versus the trail I followed (in blue). There are no markings indicating that the trail continues straight on.

A view of Garden of the Gods Wilderness northwest of the trail.

The trail has campsites dotted along it, with remnants of campfires. They are close enough that campers cannot really get much privacy from hikers passing by.

Vegetation is sparse enough that you can easily see through the forest. There is a thin layer of soil sitting on top of the rock, so the foliage isn't as lush as on some other trails. The ground is covered with small plants along most of the trail, with more pine needles along the top.

Some of the flora along the trail.

I stopped for some water and a granola bar looking out over a rocky outcrop, and took a little time to sit and enjoy the view to the west.

Soaking in a view of the Garden of the Gods wilderness.

I continued on, and found what looked like a small trail going down the side of the mountain. I headed down a very steep path between the rocks to explore a little, and found some caves in the side of the mountain, just under where I had been sitting, that had residue from campfires in them.

There were a few caves like this tucked into the side of the mountain, just underneath the trail. Both had residue from campfires in them.


I went down a little further to where the path exited the rocks and went into woodland, then headed back up to the top.

Time to turn around and head back up. It's a 50' climb from here, and I'll end up at the top just above the tree.

The remainder of the trail is a winding path back around the top of the mountain to the pond, where I picked back up with the trail heading back to the trail-head.

This trail is not heavily maintained, and not well marked. There are no blazes or signs, so it's up to you to figure out where to find the trail. That's generally not a problem, as it's well worn and not easily confused with streams (see my earlier account of the Panther Den hike). As mentioned above, though, the lack of marking has the potential to cut your hike short.



Details:
Min Elevation: 768 ft.
Max Elevation: 928 ft.
Distance: 1.37 mi.

Shawnee National Forest: Observation Trail

The Observation Trail is the second of four trails I hit last weekend while visiting family down south.



The Observation Trail is not really a hiking trail; rather, it's a walking path paved with sedimentary rock and cement mortar. It's laid out more as a park-service-supported tourist attraction than a trail, but the views are really nice. The walking path is a quarter mile long, and about three feet wide in most spots. It includes benches and plaques describing the rock formations and geology of the region.

The first visible rock formation and the walking path

Garden of the Gods has some beautiful views, particularly for the mostly flat Midwest. I arrived at around 8:30 a.m. on a Sunday, and there were only a few visitors, so photography opportunities were frequent.

Just minutes into the walk, a view to the north opens up. There are no guardrails on most of the outcroppings of rocks to stop you from going over the cliffs.

Looking north. Most of the visible land is part of the Shawnee Wilderness.


Looking south by southwest.


The Devil's Smokestack. This formation occurred as softer sandstone was eroded from around the rock.


Camel Back Rock bears a striking resemblance to a camel.