The Irresistible Allure of Pre-crastination

Harvard Business Review:

Working smarter, not harder, can be an uphill battle to sell, but it is the right approach. I don’t always tackle the easiest items on my agenda first. I tend to tackle a few easy ones to get psyched up, then hold some easy tasks in reserve as breaks from the more time-consuming work, to stay psyched for the important and challenging work.

Originally posted on HBR Blog Network - Harvard Business Review:

There’s a fairly common maxim that “He who hesitates is lost.” It’s a curious proverb, not only because its origins are murky, but also because it might just be really bad advice for productivity. Recent research suggests that rushing to complete projects might actually harm our productivity, a phenomenon that’s been called “pre-crastination”—choosing a more difficult way to complete a task just to be able to mark the task complete.

This finding is the result of nine studies published together in Psychological Science by a team of Penn State University researchers led by David Rosenbaum. In each of the studies the researchers gave participants (Penn State undergraduate students) a simple choice: pick one of two buckets and carry it down an alleyway. While making the choice between the two buckets, participants were specifically told to pick whichever seemed easier for completing the task. The nine experiments varied the weight of…

View original 571 more words

#IoE The Internet of Everything (Things) #IoT

If you didn’t know it already, tech is rampant and wanton in its generation of buzzwords for all sorts of things, or more specifically for sexing up all sorts of things that are more than the sum of their parts. The singularity, Web 2.0, e-this, i-that, social, mobile, the cloud, big data, and the Internet of Everything (Things), to name a few. Sometimes these things overlap. Sometimes they overlap in so many ways that the Venn diagram starts to need an expert in algebraic topology to explain it. Lately, the buzz has been around the cloud, big data, and the internet of things. As a technology professional, and in particular a database professional, it is important to me to sort through to the really cool things.

One of the things we now have is an internet of the light bulb. This is real, unlike the Hyper Text Coffee Pot Control Protocol, a 16-year-old April Fools’ joke the IETF created, or the Linux toaster, which is a malware vector. Philips Hue is a light bulb (system) that represents the culmination of incremental improvements to lighting, bringing together a number of preexisting lighting features and adding them to an API. Hue combines timers, dimmers, color, and circuit switching to provide a system that allows us to further take for granted one of the greatest technical achievements of the last millennium. That, of course, is the internet of things, at least the command side of it. I wonder if people have it in them to take their beverages according to a queue, or perhaps a location-aware protocol that starts my coffee when I approach the break room during a certain period of time.
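
To make the API point concrete, here is a minimal sketch of driving a Hue bulb through the bridge’s local REST interface in Python. The bridge address, the whitelisted username, and the light ID are placeholders you would swap for your own setup.

    import requests

    # Assumptions for this sketch: the bridge answers at 192.168.1.2 and
    # "myusername" is an API key whitelisted via the bridge's link button.
    BRIDGE = "http://192.168.1.2/api/myusername"

    def set_light(light_id, on=True, brightness=200):
        """Push a state change to one bulb over the bridge's REST API."""
        state = {"on": on, "bri": brightness}  # bri ranges 1-254
        resp = requests.put(f"{BRIDGE}/lights/{light_id}/state", json=state)
        resp.raise_for_status()
        return resp.json()

    # Dim bulb 1 to roughly 40%: a timer plus a dimmer behind an API.
    set_light(1, on=True, brightness=100)

The same PUT, fired from a scheduler or a geofence trigger, is all the start-my-coffee-when-I-approach scenario really amounts to on the command side.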

Other things there are internets of will have sensors. These will write data somewhere. Lately, it looks like some Hadoop file system will be the standard. I wish I were more abreast of the internals, the way a storage engineer would be, so I could form a better opinion of why HDFS would be a better choice than WAFL for writing streams. I suspect it has to do primarily with Map Reduce (at least public domain Map Reduce) only being implemented on Hadoop and MongoDB (and …). Of course, if we need to respond to events in real time, we need to put our event triggers upstream from storage.
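
To make the Map Reduce point concrete, here is a sketch of a Hadoop Streaming style mapper and reducer in Python that count readings per sensor. The comma-separated input layout is an assumption invented for this example, not any standard sensor format.

    import sys

    # Toy Hadoop Streaming job: count readings per device. Input lines are
    # assumed (for this sketch only) to look like "device_id,timestamp,value".

    def mapper(lines):
        for line in lines:
            device = line.strip().split(",")[0]
            print(f"{device}\t1")

    def reducer(lines):
        # Hadoop sorts mapper output by key before the reducer sees it,
        # so equal keys arrive contiguously.
        current, count = None, 0
        for line in lines:
            key, value = line.strip().split("\t")
            if key != current:
                if current is not None:
                    print(f"{current}\t{count}")
                current, count = key, 0
            count += int(value)
        if current is not None:
            print(f"{current}\t{count}")

    if __name__ == "__main__":
        # Run as "sensor_count.py map" or "sensor_count.py reduce";
        # Hadoop Streaming pipes input splits through stdin/stdout.
        (mapper if sys.argv[1] == "map" else reducer)(sys.stdin)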

SQL Server Files, Drives, and “Performance”

Last night, after our monthly PSSUG meeting, our presenter Jason Brimhall found himself waiting for a cab at Microsoft’s Malvern, PA office. Having waited for a taxi in Philly’s suburbs before, I felt bad about a wait that might never end, so I offered him a ride to the airport Marriott. On the drive I took the opportunity to ask his opinion on different strategies for allocating database files in SQL Server. This is a topic I find myself drawn to, and I welcome any opportunity to hear other people’s (especially a Microsoft Certified Master’s) opinions on the subject.

I tend to find myself drawn to a simple approach that increases the number of files in proportion to the size of the database. I admit that this is a matter of convenience and portability that has its limits. While Microsoft publishes some file numbers as the maximum (4,294,967,295) and the number above which issues may occur (50,000), a much lower number of files tends to be practical depending on needs. In one case I encountered, I found that at about 200GB I struggled to get reasonable copy and move speeds even with ESEUTIL. That was, of course, a single-site scenario with Fibre Channel SAN as source and destination. Other scenarios require much smaller file sizes as well as other tools for copying (XCOPY, ROBOCOPY).

Copies over a WAN (which should be over a VPN\VLAN, but that is a different discussion Sebastian Meine could comment on as a segue to his presentation on new security roles in SQL Server)… copies over a WAN will traverse network segments of varying speeds, few if any of which will ever approach the throughput of Fibre Channel SAN. In these cases it is a matter of balancing a manageable number of files and folders against keeping files small enough to copy. For each of these cases, though, it is important to note that performance is being measured on the dimensions of database file copy speed and file manageability (an intersection of scripting for file and folder creation and the flexibility of the physical data model to make use of the files); these are not frequently the dimensions of performance that our users notice.
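
As a sketch of the file-and-folder scripting just mentioned, here is one way to generate the T-SQL that grows a database’s file count with its size. The database name, path, filegroup, and the one-file-per-50GB ratio are illustrative assumptions, not recommendations.

    import math

    # Sketch: emit ALTER DATABASE statements so file count scales with size.
    # The name, path, filegroup, and ratio below are made-up example values.

    def add_file_statements(db, size_gb, gb_per_file=50,
                            path=r"D:\SQLData", filegroup="PRIMARY"):
        target = max(1, math.ceil(size_gb / gb_per_file))
        stmts = []
        for i in range(1, target + 1):
            stmts.append(
                f"ALTER DATABASE [{db}] ADD FILE "
                f"(NAME = N'{db}_data{i}', "
                f"FILENAME = N'{path}\\{db}_data{i}.ndf', "
                f"SIZE = 10GB, FILEGROWTH = 1GB) "
                f"TO FILEGROUP [{filegroup}];"
            )
        return stmts

    # A 200GB database gets four data files generated under this ratio.
    for stmt in add_file_statements("SalesDB", size_gb=200):
        print(stmt)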

Jason pointed out that, from the point of view of the user, any significant query slowness will be perceived as an outage; that was a point added to his case against the separation of data and indexes into separate files, specifically relevant to cases of piecemeal, file-based recovery. Our discussion started with my asking his opinion on the comparative merits of creating a separate database file for each processor (core) versus using I/O affinity masks to delegate I/O-related processing to a specific processor. I feel it important to note that some NUMA systems have separate I/O channels for each NUMA node, in which case I/O affinity would deprive the member processors of those channels (I think). Jason was supportive of allocating files per processor in the more general case. He mentioned that it wasn’t necessary (or always possible) to have separate disks for these files to realize the performance benefit.
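
For the files-per-core approach, the natural starting number is how many schedulers SQL Server actually runs queries on. A minimal sketch of pulling that count with pyodbc follows; the driver version and server name in the connection string are placeholders for your environment.

    import pyodbc

    # Placeholders: swap in your own driver version and server name.
    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=myserver;DATABASE=master;Trusted_Connection=yes;"
    )

    # Visible online schedulers = cores available to the query engine,
    # the number the one-file-per-core rule of thumb keys off of.
    row = conn.execute(
        "SELECT COUNT(*) FROM sys.dm_os_schedulers "
        "WHERE status = 'VISIBLE ONLINE';"
    ).fetchone()
    print(f"Schedulers available for queries: {row[0]}")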

What are your thoughts?

Are you familiar with the proportional fill algorithm? (There is a toy simulation of it after the reading links below.)

What do you think about the impact of using local solid state disk for tempdb in a 2012 FCI? Or tiered storage?

Proportional fill reading:

http://sqlserver-performance-tuning.net/?p=2552

http://www.patrickkeisler.com/2013/03/t-sql-tuesday-40-proportional-fill.html

http://technet.microsoft.com/en-us/library/ms187087(v=SQL.105).aspx
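
On the proportional fill question above, here is a toy Python simulation of the idea. It is a deliberate simplification of SQL Server’s weighted round-robin (the real algorithm recalculates weights periodically and can skip files), and the free-space numbers are made up, but it shows why files with more free space absorb more of the new extents.

    # Toy model of proportional fill: within a filegroup, new extents go
    # to files in proportion to free space, so unevenly sized files fill
    # at roughly the same rate. All numbers below are arbitrary.

    def proportional_fill(free_extents, new_extents, round_size=10):
        """free_extents: dict of file name -> free extent count.
        Per round, each file receives a share of the allocations
        proportional to its share of the remaining free space."""
        placed = {name: 0 for name in free_extents}
        while new_extents > 0:
            total_free = sum(free_extents.values())
            if total_free == 0:
                break
            placed_this_round = 0
            for name, free in list(free_extents.items()):
                share = min(round(round_size * free / total_free),
                            free_extents[name], new_extents)
                free_extents[name] -= share
                placed[name] += share
                new_extents -= share
                placed_this_round += share
                if new_extents == 0:
                    break
            if placed_this_round == 0:  # guard against rounding stalls
                biggest = max(free_extents, key=free_extents.get)
                free_extents[biggest] -= 1
                placed[biggest] += 1
                new_extents -= 1
        return placed

    free = {"data1.mdf": 800, "data2.ndf": 400, "data3.ndf": 200}
    print(proportional_fill(free, 700))
    # The 700 new extents land in roughly a 4:2:1 ratio, matching the
    # files' relative free space, so all three fill at about the same rate.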

Kiwi Puts Its All-Purpose Wearable Up For Pre-Order, Aims To Be Everything To Everyone

thomaswmarshall:

Now you’re talking about wearables. Hype all you want about smart watches that only work with certain phones, but things like this (and perhaps cufflinks) and Martian Commander watches are what are going to make the difference. The new Mac Pro may be a revolution in design that transforms a desktop computer into an objet d’art, but I doubt I could find a matching pair of shoes if someone called it a hat.

Originally posted on TechCrunch:

We’ve spoken to the folks from Kiwi Wearables before: Back in September we caught up with them at the Disrupt SF Hackathon, when they were preparing their platform and demonstrated what it could do with a sensor-laden prototype used as a gesture-based musical instrument. Now, Kiwi is ready to unveil its hardware, and make it available to consumers for pre-order.

The Kiwi Move is the product of its work to date, a small 1.6″ by 1.2″ gadget that’s only 0.35″ thick and weighs just a single ounce, but that contains an ARM Cortex M4 chip, a Bluetooth LE radio and 802.11b/g antenna, as well as an accelerometer, gyroscope, magnetometer, barometer and thermometer. It has 2GB of onboard storage, and can last 4 hours streaming data constantly, or 5 days under normal, periodic use. There’s an LED for displaying light-based notifications, and it ships with four native apps, plus a…

View original 399 more words

How Regular Exercise Helps You Balance Work and Family

Harvard Business Review:

In an earlier post I mentioned my New Year’s resolutions. Exercise is one that I have succeeded at keeping more often than I have failed. It is for the very reasons discussed in this post that I find it, above all others, the one to keep. And it is to the purpose of satisfying the 1st, 4th, and 5th of my resolutions that I got my TechnoGym Wellness Fob today.

Originally posted on HBR Blog Network - Harvard Business Review:

Matthew Beason is a well-respected executive at a non-profit with a multi-billion dollar endowment. On top of continual domestic travel, countless dinners with donors, and constant planning meetings, Matthew is also a married father of four children. While his work schedule sometimes leaves him exhausted, Matthew consistently attends school and athletic events and is, while at home, fully there for his family.

Likewise, Luke McKelvy, owner of newly formed McKelvy Wealth Management, has a busy schedule of meeting with current and prospective clients and setting up his new business. Luke is the married father of two children, twin boys under the age of two. Like Matthew, he manages to square the priority he places on his family’s happiness with the demands of work he considers important.

Matthew and Luke have pulled off the neat trick of successfully integrating work and life mainly through a skillful alignment of their priorities. But…

View original 830 more words

SQL SERVER – Copy Statistics from One Server to Another Server

thomaswmarshall:

I was watching SANMAN’s most recent video, DEVOPS: Mission Impossible, and it reminded me of Pinal Dave’s article here. This is a great way to avoid moving data and to diminish the weight of arguments for allowing development access to production databases and data.

Originally posted on Journey to SQL Authority with Pinal Dave:

I was recently working on a performance tuning project in Dubai (yeah I was able to see the tallest tower from the window of my work place). I had a very interesting learning experience there. There was a situation where we wanted to receive the schema of original database from a certain client. However, the client was not able to provide us any data due to privacy issues. The schema was very important because without having an access to underlying data, it was a bit difficult to judge the queries etc. For example, without any primary data, all the queries are running in 0 (zero) milliseconds and all were using nested loop as there were no data to be returned. Even though we had CPU offending queries, they were not doing anything without the data in the tables. This was really a challenge as I did not have access to production server data and…

View original 132 more words