Work Smarter.

Learn how to Work Smarter

Thursday, July 20, 2017

Kriging with QGIS

A brief HOWTO for geoscientists trying to sort out kriging in one of QGIS's more recent versions (I'm using 2.18.10 with GRASS; downloads: https://www.qgis.org/fr/site/forusers/alldownloads.html).

The SAGA plugins are still installed, but are nowhere to be found -- until the Processing plugin is activated:


Now you have a new Processing menu, where you can turn on the Toolbox like this:


Giving you a new Toolbox over on the right (typical) side of your screen ... which even has a search function! Handy, since there are a LOT of different algorithms for us geotypes:

So now you have a full menu of geospatial algorithms, including those we've all studied in GIS courses using ESRI products; you can tweak the variogram model and the block or cell size (for different sampling resolutions).

One thing to note: these interpolations write to temporary files by default, because we often "test and discard" our stat models. But if you don't remember to save your "good" interpolations and add them as a layer before you quit -- or before QGIS crashes (all. the. time., at least on my machine) -- you lose your beautiful kriging. Attention.
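
If you'd rather script this than click through the dialogs, the same algorithms are reachable from the QGIS Python console via the processing module. Here's a minimal sketch against the 2.x API (runalg is gone in QGIS 3); the algorithm id 'saga:ordinarykriging', the parameter keys, the field name, and the file paths are illustrative assumptions -- list what your SAGA install actually exposes first:

    import processing

    # List every algorithm whose name matches "kriging"
    processing.alglist('kriging')

    # Print the parameter list for the one you want
    # (this id is an assumption -- use whatever alglist printed)
    processing.alghelp('saga:ordinarykriging')

    # Run it with an explicit output path instead of a temp file,
    # so a crash can't eat the result. The parameter keys below are
    # illustrative; match them to the alghelp output above.
    processing.runalg('saga:ordinarykriging',
                      {'POINTS': '/data/samples.shp',      # hypothetical input
                       'FIELD': 'Zn_ppm',                  # hypothetical attribute
                       'TARGET_USER_SIZE': 25.0,           # assumed cell-size key
                       'PREDICTION': '/data/kriged.tif'})  # assumed output key

    # Load the saved raster as a layer so it survives the session
    from qgis.utils import iface
    iface.addRasterLayer('/data/kriged.tif', 'kriged Zn')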


That's it -- perhaps a version in French later.

Tuesday, July 12, 2016

Oeup, not much here

Having been away from "pdxedm" for some years, I realized Google had hidden the blog based on its settings. Since it's linked from my (six-years-out-of-date) personal webpage, I figured I should at least unhide my outdated blog. Until I find the motivation for new material here (and there's lots to say), you can follow my rather tedious progress on a Master's degree over here:
http://windmills.jnorville.com/

Thanks for your visit, have a lovely day.

Friday, August 12, 2011

Playing with Fusion Tables

More APIs are published every day for speedy visualization of data.  Here's one I like:


This map was generated from a 2010 Quality of Life index published by International Living (http://www.internationalliving.com). Check the source data here:
http://www.google.com/fusiontables/DataSource?dsrcid=105221&search=&cd=17

Wednesday, July 21, 2010

Why Database? Part 2

That example database (WhyDatabase.mdb) I was talking about -- you're going to need it today.  I'll include some screenshots of it so those without Microsoft Access can see this simple example.

Goal: store data in one place; refer to it from other places; simplify.  Here we'll just rename a grab sample from a well...

1. Open WhyDatabase.mdb. You needn't worry about the "Security Warning" for now -- that relates to VBA and proprietary MS Access goodies that we're not using (yet).

2. Make sure you're viewing All Access Objects.

3. Note that this database has four tables and one query.

We're going to open two of these objects: tblNotNormal, and qryFlatOutput.

Each object has 31 records; they appear to contain identical data about a grab sample from MW-5.

We learned the field tech did not adhere to naming standards, so SampleName needs to be updated: "MW-5_grab" becomes "MW-5_G".  Or whatever.

In tblNotNormal -- a literal "flat file" -- this is even more painful than Excel: copy/pasting each record takes ages... and there is no "rule" validating our changes for typos:

(Yes, Excel users would recognize a great opportunity for Fill Down.)

Note Access's "pencil" icon indicating which record is currently being edited.

Then give qryFlatOutput a go -- magically simple:

Change the first record -- all the rest update when the record is "committed."

Can you misname any individual SampleName in qryFlatOutput?  No: since you are updating one record, and the query points to it multiple times, the "data rule" says they all have to be named the same.

So without any of the potential "gotchas" of find-and-replace, and without worrying about Fill Down or other error-inducing Excel mass data crunching, you leveraged a data rule to update 31 records with a keystroke.  Strong work, friends.
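
If you want to poke at the same idea outside Access, here's a minimal sqlite3 sketch (Python) of what qryFlatOutput is doing. The table and column names echo this post, but the actual design inside WhyDatabase.mdb may differ -- treat this as an illustration of storing the name in one place, not a copy of the file:

    import sqlite3

    con = sqlite3.connect(':memory:')

    # One row per sample: the single place the name is stored
    con.execute("CREATE TABLE tblSample ("
                "SampleID INTEGER PRIMARY KEY, SampleName TEXT NOT NULL)")
    # Many rows per sample: each result points back to the sample by ID
    con.execute("CREATE TABLE tblResult ("
                "ResultID INTEGER PRIMARY KEY, "
                "SampleID INTEGER REFERENCES tblSample(SampleID), "
                "Analyte TEXT, Value REAL)")

    con.execute("INSERT INTO tblSample VALUES (1, 'MW-5_grab')")
    con.executemany(
        "INSERT INTO tblResult (SampleID, Analyte, Value) VALUES (1, ?, ?)",
        [('analyte_%02d' % i, float(i)) for i in range(31)])

    # The qryFlatOutput equivalent: a join that *looks* like the flat file
    flat = ("SELECT s.SampleName, r.Analyte, r.Value "
            "FROM tblResult r JOIN tblSample s ON s.SampleID = r.SampleID")

    # The "keystroke": one UPDATE touching one row...
    con.execute("UPDATE tblSample SET SampleName = 'MW-5_G' WHERE SampleID = 1")

    # ...and all 31 joined records show the new name
    print(set(name for name, _, _ in con.execute(flat)))  # {'MW-5_G'}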

Yeah, it was a little easy -- but take a look at the Relationship diagram, open the query in Design view, and get comfortable with what we're doing here.  Back in a couple days for more.

Friday, July 9, 2010

Why Database? Part 1

Imagine you have two piles of rocks; your client wants them all in one pile.  (Those of you already rolling your eyes -- welcome to consulting, people.)  Could be two bodies of water, whatever.  But you need to move stuff from A to B.  Now.

Let's explore some options.  You could 1) grab your bucket out of the truck and start moving rocks; 2) design/build some nifty conveyance system (belt for rocks, or pipe if it's water, you get it) and let it do the work for you.  You see where this is going.

Option 1, or the Just Lift It approach.  This works well if:

  • this is a one-time job;
  • you're blessed with inexpensive help; or
  • some slop is acceptable (you're gonna drop a rock, splash some water, as you get tired).

Option 2, aka Process Engineering.  This works well if:

  • the pile of rocks will reappear tomorrow;
  • you are priced as a Boutique Consultancy; or
  • accuracy, thoroughness, whatever, is critically important.

Option 3.  Work Smarter -- know when and why to pick one of these ways of working over the other.  If this were as obvious as it seems, we would all do it.

How can we tell when it's time to react quickly to a client's need, and when it's an opportunity to improve processes?  Vision.  Ultimately, it seems to me, some well-defined, efficient, flexible processes should provide the best value to your client base.  I've found this challenging whenever I (or my company's visionaries) become over-attached to either option.  They are two ways of working -- not complementary, but exclusive of one another.

So this leaves the challenge of finding the point where one approach becomes more efficient and sustainable than the other.  You need to be able to measure the costs of each -- find where the cost-benefit (or time-effort) lines cross.
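
To make "find where the lines cross" concrete, here's a back-of-the-envelope Python sketch.  All of the hour figures are invented -- plug in your own estimates:

    # Just Lift It: cost grows linearly with how often the task recurs
    hours_per_run_manual = 6.0       # invented estimate
    # Process Engineering: big setup cost, cheap repeats
    setup_hours = 40.0               # invented estimate
    hours_per_run_automated = 0.5    # invented estimate

    for runs in range(1, 13):
        manual = runs * hours_per_run_manual
        engineered = setup_hours + runs * hours_per_run_automated
        marker = '  <-- lines cross' if engineered <= manual else ''
        print('%2d runs: manual %5.1f h, engineered %5.1f h%s'
              % (runs, manual, engineered, marker))

With these made-up numbers the engineered approach pays for itself around the eighth recurrence; for a genuinely one-time job it never does -- which is Option 1's whole case.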

Most consultancies do a great job of measuring an individual's performance with respect to invoices -- your utilization -- which helps find slow typists, not-so-bright bulbs, and people who spend more time on Facebook than OWRD websites.


Not many firms in our industry measure this crossover.  But I think it's a concept worth playing with this weekend.  To be continued!

UPDATE: Yes, an example is a good idea; download this sample database (it contains fake analytical data from the mythical "MW-5").  It's a Microsoft Access MDB file (2002-03 format), which is what I figure most of us have; it can of course be opened in SQL Server, imported into MySQL, or opened on Linux with MDB viewer and/or OpenOffice "Base".  It includes a couple of small examples of normalized and non-normalized tables.

Tuesday, May 25, 2010

MS Access and data visualization.

So ... my personal opinion is that Access isn't the very best way to perform data visualization.  There are lots of reasons for this -- primary among them, it's not intended to create shared apps.  Which is generally why you visualize data -- to help someone else understand your brilliant, if slightly obfuscated, string of numbers.

But!  It's pretty quick to learn; there are lots of knowledgeable and generous folk out there learning alongside you; and, hey, it's the most ubiquitous database-like app most consultants have pre-installed from IT!  And, for those using Excel, really ... the trick is to bang the rocks together, guys!

That's a screen for checking out project site stuff -- like well depth and location info, well completion (screen, casing) zones, lithology data (including for each zone), and ... stuff.  Later I'll walk through the table designs under these tools (something I'm a little more excited about).

If you click a construction interval (left, lower table) you get this gem:


If you filter to a well that has associated entries in the water level table, here's your view:


And, naturally, the #1 request -- "give me a way to map this point" interface:

The user selects any number of wells in the upper "filter" window, adding them to the output box; after hitting "map 'em" the points are displayed with A-Z tags (Google Maps style) in the order of the output listbox.  If more than 25 points are sent to the map, the user is warned that mapped points get a little hard to distinguish after each 25 have been added.

There's an option to output this cursor to either Excel or Google Earth KML (DAO).
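
The real thing drives that export with VBA/DAO inside Access; as a language-neutral illustration of how little a point export needs, here's a minimal Python sketch that writes the same kind of well-point KML by hand.  The well names and coordinates are made up:

    # Hypothetical well records: (name, longitude, latitude)
    wells = [('MW-5', -122.67, 45.52),
             ('MW-6', -122.66, 45.53)]

    placemark = ('  <Placemark><name>%s</name>'
                 '<Point><coordinates>%f,%f,0</coordinates></Point></Placemark>')

    with open('wells.kml', 'w') as f:
        f.write('<?xml version="1.0" encoding="UTF-8"?>\n')
        f.write('<kml xmlns="http://www.opengis.net/kml/2.2">\n<Document>\n')
        for name, lon, lat in wells:
            # KML wants longitude first, then latitude, then altitude
            f.write(placemark % (name, lon, lat) + '\n')
        f.write('</Document>\n</kml>\n')

    # Open wells.kml in Google Earth: one labeled point per well.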

The real joy of these?  Once the user has well-organized data in normalized tables, it's a snap to reuse the forms and report interfaces across any project.

Wednesday, May 19, 2010

Normalization

Existing noninferential, formatted data systems provide users with tree-structured files or slightly more general network models of the data... -- E.F. Codd


I've skipped over discussions of data theory so far -- but a couple of its principles are important here.  And, since I'm not a DB guru, correct me as we go.  I find errors in my previous projects all the time.

E.F. Codd, a mathematician, introduced normalization in 1970.  Normalization has a lot of interesting theoretical underpinnings, but, lest I lose the plot, in our frame of reference it's just a way to organize data to make corruption unlikely.  Here are two readable, more recent docs:
http://www.troubleshooters.com/littstip/ltnorm.html (wish I'd followed Litt's "Additional normalization tips" first time 'round!  Next entry.)
http://dev.mysql.com/tech-resources/articles/intro-to-normalization.html

We could go into 3NF, or BCNF (aka 3.5NF, or Heath normal form), but ...  you, in the back!  Wake up...

Data integrity is why we're doing this -- set 3NF as a goal.  The only thing a database does FOR you is say "no" a lot -- like a good parent, it sets the rules.  And those rules should keep you from reporting an 8260 as an 8260B, from reporting a 2009 water elevation as a depth below an outdated 1993 measuring point elevation, stuff like that.  Okay, silly example time.
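
Here's what "saying no" looks like in practice -- a minimal sqlite3 sketch (Python) where a lookup table of valid method codes is the rule, and a foreign key rejects the 8260/8260B slip from above.  The table names are invented:

    import sqlite3

    con = sqlite3.connect(':memory:')
    con.execute('PRAGMA foreign_keys = ON')  # sqlite enforces FKs only when asked

    # The rule lives in one place: a lookup table of valid method codes
    con.execute("CREATE TABLE tblMethod (MethodCode TEXT PRIMARY KEY)")
    con.execute("INSERT INTO tblMethod VALUES ('8260B')")

    con.execute("CREATE TABLE tblResult ("
                "ResultID INTEGER PRIMARY KEY, "
                "MethodCode TEXT NOT NULL REFERENCES tblMethod(MethodCode))")

    con.execute("INSERT INTO tblResult (MethodCode) VALUES ('8260B')")  # accepted
    try:
        con.execute("INSERT INTO tblResult (MethodCode) VALUES ('8260')")
    except sqlite3.IntegrityError as e:
        print('the database said no:', e)  # FOREIGN KEY constraint failed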