May 28, 2015

Joerg MoellenkampEvent announcement - Oracle Business Breakfast on June 26, 2015 in Munich - "Service Management Facility"

May 28, 2015 05:45 GMT
As this event takes place in Germany and will be held in German, here are the details:

On June 26, 2015, a Business Breakfast will take place in Munich. The topic is an introduction to the Service Management Facility. You can register at this link.
The Service Management Facility (SMF) in Solaris, although included since version 10, is still a field that most customers rarely enter and often work around by writing an init.d script - which means giving up functionality. This breakfast will refresh the basics of SMF, explain new features that have been added to SMF, give tips and tricks for working with SMF, and cover some features that are rarely associated with it. It will also answer the question of what the /system/contract mountpoint is all about and how the feature behind it can be used outside of SMF as well.

In particular, I will cover the new Solaris 11.2 feature "SMF stencils", which is still unknown to many.

May 27, 2015

Jeff SavitOracle VM Server for SPARC 3.2 now available for Solaris 10 control domains

May 27, 2015 16:49 GMT

Oracle has released Oracle VM Server for SPARC 3.2 packages for Solaris 10 control domains. The package can be downloaded from http://www.oracle.com/technetwork/server-storage/vm/downloads/index.html#OVMSPARC

Not all of the performance and functional enhancements of Oracle VM Server for SPARC 3.2 are available when used with Solaris 10. Oracle recommends that customers use Solaris 11, especially for the control domain, service and I/O domains. Note that future Oracle VM Server for SPARC releases will no longer support the running of the Oracle Solaris 10 OS in control domains. You can continue to run the Oracle Solaris 10 OS in guest domains, root domains, and I/O domains when using future releases. Solaris 10 guest domains can be used with Solaris 11 control domains, allowing interoperability while moving to Solaris 11. For additional details, please see the Release Notes.

Darryl GoveMisaligned loads profiled (again)

May 27, 2015 05:08 GMT

A long time ago I described how misaligned loads appeared in profiles of 32-bit applications. Since then we've changed the level of detail presented in the Performance Analyzer. When I wrote the article the time spent on-cpu that wasn't User time was grouped as System time. We've now started showing more detail - and more detail is good.

Here's a similar bit of code:

#include <stdio.h>

static int i,j;
volatile double *d;

void main ()
{
  char a[10];
  d=(double*)&a[1];   /* deliberately misaligned: &a[1] is not 8-byte aligned */
  j=100000000;
  for (i=0;i < j; i++)
  {
    *d+=5.0;          /* every load and store of *d takes a misalignment trap */
  }
  printf("%f",*d);    /* print the value, not the pointer */
}

This code stores into a misaligned double, and that's all we need in order to generate misaligned traps and see how they are shown in the performance analyzer. Here's the hot loop:

Load Object: a.out

Inclusive       Inclusive       
User CPU Time   Trap CPU Time   Name
(sec.)          (sec.)          
1.131           0.510               [?]    10928:  inc         4, %i1
0.              0.                  [?]    1092c:  ldd         [%i2], %f2
0.811           0.380               [?]    10930:  faddd       %f2, %f0, %f4
0.              0.                  [?]    10934:  std         %f4, [%i2]
0.911           0.480               [?]    10938:  ldd         [%i2], %f2
1.121           0.370               [?]    1093c:  faddd       %f2, %f0, %f4
0.              0.                  [?]    10940:  std         %f4, [%i2]
0.761           0.410               [?]    10944:  ldd         [%i2], %f2
0.911           0.410               [?]    10948:  faddd       %f2, %f0, %f4
0.010           0.                  [?]    1094c:  std         %f4, [%i2]
0.941           0.450               [?]    10950:  ldd         [%i2], %f2
1.111           0.380               [?]    10954:  faddd       %f2, %f0, %f4
0.              0.                  [?]    10958:  cmp         %i1, %i5
0.              0.                  [?]    1095c:  ble,pt      %icc,0x10928
0.              0.                  [?]    10960:  std         %f4, [%i2]

So the first thing to notice is that we're now reporting Trap time rather than aggregating it into System time. This is useful because trap time is intrinsically different from system time, so it's worth displaying it differently. Fortunately the new overview screen highlights the trap time, so it's easy to recognise when to look for it.

Now, you should be familiar with the "previous instruction is to blame" rule for interpreting the output from the performance analyzer. Dealing with traps is no different: the time reported on the next instruction is due to the trap of the previous instruction. So the final load in the loop takes about 1.1s of user time and 0.38s of trap time.

Slight side track about the "blame the last instruction" rule. For misaligned accesses the problem instruction traps and its action is emulated. So the next instruction executed is the instruction following the misaligned access. That's why we see time attributed to the following instruction. However, there are situations where an instruction is retried after a trap; in those cases the next instruction executed is the instruction that caused the trap. Examples of this are TLB misses or save/restore instructions.

If we recompile the code as 64-bit and set -xmemalign=8i, then we get a different profile:

Exclusive       
User CPU Time   Name
(sec.)          
3.002           <Total>
2.882           __misalign_trap_handler
0.070           main
0.040           __do_misaligned_ldst_instr
0.010           getreg
0.              _start

For 64-bit code the misaligned operations are fixed in user-land. One (unexpected) advantage of this is that you can take a look at the routines that call the trap handler and identify exactly where the misaligned memory operations are:

0.480	main + 0x00000078
0.450	main + 0x00000054
0.410	main + 0x0000006C
0.380	main + 0x00000060
0.370	main + 0x00000088
0.310	main + 0x0000005C
0.270	main + 0x00000068
0.260	main + 0x00000074

This is really useful if there are a number of sites and your objective is to fix them in order of performance impact.

May 20, 2015

OpenStackLIVE WEBINAR (May 28): How to Get Started with Oracle OpenStack for Oracle Linux

May 20, 2015 18:32 GMT
Webinar title: Oracle VM VirtualBox to Get Started with Oracle OpenStack for Oracle Linux

Date: Thursday, May 28, 2015

Time: 10:00 AM PDT

Speakers: 
Dilip Modi, Principal Product Manager, Oracle OpenStack
Simon Coter, Principal Product Manager, Oracle VM and VirtualBox

You are invited to our webinar about how to get started with Oracle OpenStack for Oracle Linux. Built for enterprise applications and simplified for IT, Oracle OpenStack for Oracle Linux is an integrated solution that focuses on simplifying the building of a cloud foundation for enterprise applications and databases. In this webcast, Oracle experts will discuss how to use Oracle VM VirtualBox to create an Oracle OpenStack for Oracle Linux test environment and get you started learning about Oracle OpenStack for Oracle Linux.

Register today 

May 19, 2015

OpenStackOracle Solaris gets OpenStack Juno Release

May 19, 2015 16:02 GMT

We've just recently pushed an update to Oracle OpenStack for Oracle Solaris. Supported customers who have access to the Support Repository Updates (SRU) can upgrade their OpenStack environments to the Juno release with the availability of SRU 11.2.10.5.0.

The Juno release includes a number of new features, and in general offers a more polished cloud experience for users and administrators. We've written a document that covers the upgrade from Havana to Juno. The upgrade process involves some manual administration to copy and merge OpenStack configuration across the two releases, and to upgrade the database schemas that the various services use. We're working hard to provide a more seamless upgrade experience, so stay tuned!

-- Glynn Foster

The Wonders of ZFS StorageOracle ZFS and OpenStack: We’re Ready … and Waiting

May 19, 2015 16:00 GMT

After Day 1 of the OpenStack Summit, the ongoing debate rages as it does with all newish things:  Is OpenStack ready for prime time? The heat is certainly there (the OpenStack-savvy folks will see what I did there), but is the light? Like anything in tech, it depends.  The one thing that is clearly true is that the movement itself is on fire.

The "yes" side says "OpenStack is ready, but YOU aren't!!!"  The case being made on that side is: "OBVIOUSLY if you simply throw OpenStack over a bunch of existing infrastructure and "Platform 2" applications, you will fail.  If instead, you build an OpenStack-proof infrastructure, and then run OpenStack on top of it, you can succeed."

The "No" side says "That sounds hard! And isn't at least part of the idea here to get to a single, simple, consolidated dashboard? I want THAT because THAT sounds easier."

Who is right?  Both, of course.  But the "yes" side essentially admits that the answer is still sort of "no", because the "no" side is right that OpenStack is probably still too hard for shrink-wrapped, make-life-in-my-data-center-easier use.  What the "yes" side is really saying is that some of the issues OpenStack solves for today are worth solving despite the fact that they are hard.  Walmart's e-commerce site is a big example.

Here in Oracle ZFS Storage land, we get asked to explain this "yes" or "no" problem to our customers every day (several of whom have a presence at the summit), and we tell most of them that the answer is "not yet".  But keep an eye on it, because "yes" will be a very useful thing when it arrives.  For our part, we came here to the Vancouver summit saying the following:

In the language of the Gartner hype cycle, I think OpenStack is entering the notorious "trough of disillusionment" (Gartner doesn't have a specific mention of OpenStack on their curve). That's fine.  All great technological advances must necessarily pass through this stage.  Our plan is to keep developing while the world figures it all out, and to be there with the right answers when we all get to the other side.

OpenStackJoin us at the Oracle OpenStack booth!

May 19, 2015 14:28 GMT

We've reached the second day of the OpenStack Summit in Vancouver and our booth is now officially open. Come by and see us and talk about some of the work that we've been doing at Oracle - whether it's integrating a complete distribution of OpenStack into Oracle Linux and Oracle Solaris, Cinder and Swift storage on the Oracle ZFS Storage Appliance, integration of Swift with our Oracle HSM tape storage product, or how to quickly provision Oracle Database 12c in an OpenStack environment. We've got a lot of demos and experts there to answer your questions.

The Oracle sponsor session is also on today. Markus Flierl will be talking about "Making OpenStack Secure and Compliant for the Enterprise" from 2:50-3:30pm on Tuesday in Room 116/117. Markus will talk about the challenges of deploying an OpenStack cloud while still meeting critical security and compliance requirements, and how Oracle can help you do this.

And in case anyone asks, yes, we're hiring!

OpenStackHow to set up an HA OpenStack environment with Oracle Solaris Cluster

May 19, 2015 14:13 GMT

The Oracle Solaris Cluster team have just released a new technical whitepaper that covers how administrators can use Oracle Solaris Cluster to set up an HA OpenStack environment on Oracle Solaris.

Providing High Availability to the OpenStack Cloud Controller on Oracle Solaris with Oracle Solaris Cluster

In a typical multi-node OpenStack environment it's important that administrators can set up infrastructure that is resilient to service or hardware failure. Oracle Solaris Cluster is developed in lock step with Oracle Solaris to provide additional HA capabilities and is deeply integrated into the platform. Service availability is maximized with fully orchestrated disaster recovery for enterprise applications in both physical and virtual environments. Leveraging these core values, we've written some best practices for how to integrate clustering into an OpenStack environment, with a guide that initially covers a two-node cloud controller architecture. Administrators can then use this as a basis for a more complex architecture spanning multiple physical nodes.

-- Glynn Foster

The Wonders of ZFS StorageOracle ZFS Storage Intelligent Replication Compression

May 19, 2015 02:15 GMT

Intelligent Is Better ....

Remote replication ensures robust business continuity and disaster recovery protection by keeping data securely in multiple locations. It allows your business to run uninterrupted and provides quick recovery in case of a disaster such as a fire, flood, hurricane or earthquake. Unfortunately, long distance replication can often be limited by poor network performance, varying CPU workloads and WAN costs.

What’s needed is intelligent replication that understands your environment and automatically optimizes for performance and efficiency, on-the-fly, before sending data over the wire. Intelligent replication that constantly monitors your network speeds, workloads and overall system performance and dynamically tunes itself for best throughput and cost with minimum impact to your production environment.

Oracle ZFS Storage Appliance Intelligent Replication Compression

Oracle’s ZFS Storage Replication with Intelligent Compression does just that. It increases replication performance by intelligently compressing data sent over the wire for optimum speed, efficiency and cost. It monitors ongoing network speeds, CPU utilization and network throughput and then dynamically adjusts its compression algorithms and replication stream thread counts to deliver best replication performance and efficiency, even with varying network speeds and changing workloads. This adaptive intelligence and dynamic auto-tuning allows ZFS Storage Appliance Replication to run on any network with increased speed and efficiency, while minimizing overall system impact and WAN costs.



Oracle’s Intelligent Replication Compression utilizes Oracle’s unique Adaptive Multi-Threading and Dynamic Algorithm Selection for replication compression and replication streams that continuously monitors (every 1MB) network throughput, CPU utilization and ongoing replication performance. Intelligent algorithms then automatically adjust the compression levels and multi-stream thread counts to optimize network throughput and efficiency. It dynamically auto-tunes compression levels and thread counts to fit changing network speeds and storage workloads, such as high compression for slow/bottlenecked networks and fast compression for fast networks or high CPU utilization workloads. It offers performance benefits in both slow- and high-speed networks when replicating various types of data, while optimizing overall system performance and reducing network costs.
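As a rough illustration of that measure-then-adapt loop, here is a conceptual sketch with invented thresholds (not Oracle's actual implementation):

/* Conceptual sketch only -- not Oracle's implementation. For each replicated
 * chunk, pick a compression level and stream count from the observed network
 * throughput and CPU utilization, as described above. Thresholds are invented
 * purely for illustration. */
#include <stdio.h>

typedef struct {
    int compression_level;  /* 1 = fast/light ... 9 = slow/heavy */
    int stream_threads;     /* parallel replication streams      */
} tuning_t;

static tuning_t choose_tuning(double net_mbps, double cpu_util)
{
    tuning_t t = { 4, 4 };                 /* balanced default        */

    if (cpu_util > 0.85)
        t.compression_level = 1;           /* busy CPU: stay cheap    */
    else if (net_mbps < 100.0)
        t.compression_level = 9;           /* slow WAN: compress hard */

    if (net_mbps > 1000.0)
        t.stream_threads = 8;              /* fast link: more streams */
    return t;
}

int main(void)
{
    tuning_t t = choose_tuning(50.0, 0.40);    /* slow link, idle CPU */
    printf("level=%d threads=%d\n", t.compression_level, t.stream_threads);
    return 0;
}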

Intelligent Replication Compression can lead to significant gains in replication performance and better bandwidth utilization in scenarios where customers have limited bandwidth connections between multiple ZFS Storage sites and the WAN equipment (such as WAN accelerator) does not provide compression. Up to 300% increases in replication speeds are possible, depending on network speeds, CPU utilization and data compressibility. Best of all, Intelligent Replication Compression comes free with the ZFS OS8.4 software release.

What About Offline Seeding?

Oracle’s ZFS Storage Replication is based on very efficient snapshot technology, with only delta changes sent over the wire. This can be done continuously, scheduled, or on-demand. Intelligent Replication Compression makes this very fast and efficient, but, what about the initial or full replica that could involve sending a very large amount of data to a remote site? Transmitting very large amounts of data long distances over the WAN can be both costly and time consuming. To address this, the ZFS Storage Appliance allows you to “seed” or send a full replication update off-line. You can do this by either doing a local copy to another ZFS Storage Appliance and then shipping it to a remote site, or by using an NFS server (JBOD/Disk sets) as a transport medium to send to another existing remote ZFS Storage Appliance. Incremental replicas can then be done fast and inexpensively over the WAN. This saves both time and money when setting up a remote ZFS DR site or needing to move large amounts of data efficiently. 

Summary

Superior remote replication is all about speed, efficiency and intelligence. Speed, so you can do it fast. Efficiency, so it doesn’t cost you an arm and a leg in WAN costs. Intelligence, so it dynamically optimizes itself for your ever-changing environment to achieve the highest performance at the lowest cost. Oracle ZFS Storage Replication with Intelligent Compression does all of that, and more.


May 18, 2015

The Wonders of ZFS StorageOracle ZFS Storage Powers the Oracle SaaS Cloud

May 18, 2015 16:15 GMT

On the plane to OpenStack Summit, I was thinking about what we on the Oracle ZFS Storage team have been saying about cloud storage, and how Oracle's cloud strategy internally (building the world's most successful Software-as-a-Service company) maps to our thinking. If you haven't followed the SaaS trends, Oracle's cloud has grown well beyond the recreational stage.  We're killing it, frankly, and it's built on Oracle ZFS Storage.


The cliche is that there's no clear definition for cloud (or maybe it's that there are a bunch of them). I disagree.  I think that, as typically happens, people have done their best to twist the definition to match whatever they already do.  Watch Larry Ellison's CloudWorld Tokyo keynote (there's a highlights video, but watch the whole thing).  At 22 minutes in, he walks you through how real cloud applications work.

What I'm thinking about relative to storage architecture is this notion that next-generation "cloud" storage can just be a bunch of commodity disks (think Ceph, for example), where you copy the data three times and are done with it.  OpenStack Swift works this way. In the Hadoop/Big Data world, this is conventional wisdom.  But as the amount of data people are moving grows, it simply hasn't turned out to be the case.  In the cloud, we're seeing the same bottlenecks that plague hyperconsolidation in the enterprise:  Too many apps trying to get to the same spindle at the same time, leading to huge latencies and unpredictable performance.  People are deploying flash in response, but I'd argue that's the blunt force solution.

We've learned at Oracle, and have demonstrated to our customers, that super fast, super intelligent caching is the answer.  Likewise, our friends at Adurant Technologies have shown that once your map reduce operations hit a certain scale point, Hadoop runs faster on external storage than it does on local disk.

Turns out that you can't just jump to commodity hardware and expect optimal storage efficiency.

EMC and NetApp simply aren't going to explain all of this to you.  From afar, they are hitting the right beats publicly, but look like they are flopping around looking for a real answer.  Their respective core storage businesses (FAS and VNX specifically) are flagging in the face of cloud.  Their customers are going where they can't.

And indirectly, they are coming to us.  Whether they are buying Oracle Exadata and Exalogic with Oracle ZFS Storage to turbocharge their core applications, moving to Oracle's massively expanding IaaS/PaaS/SaaS clouds, or discovering how they can get 10x efficiency by putting Oracle ZFS Storage in their own data center, they are moving away from stuff that just doesn't work right for modern workloads.

So, we're here at OpenStack, partly to embrace what our customers are hoping will be the long-sought Holy Grail of the Data Center (a single, consolidated cloud nerve center), and we're feeling rather confident.  We have the right answer, and we know we're getting to critical mass in the market.

If you happen to be in Vancouver this week, drop by Booth #P9 and we'll tell you all about it.

May 15, 2015

Security BlogSecurity Alert CVE-2015-3456 Released

May 15, 2015 19:52 GMT

Hi, this is Eric Maurice.

Oracle just released Security Alert CVE-2015-3456 to address the recently publicly disclosed VENOM vulnerability, which affects various virtualization platforms. This vulnerability results from a buffer overflow in QEMU's virtual Floppy Disk Controller (FDC).

While the vulnerability is not remotely exploitable without authentication, its successful exploitation could provide a malicious attacker who has privileges to access the FDC on a guest operating system with the ability to completely take over the targeted host system. In other words, a successful exploitation of the vulnerability allows a malicious attacker to escape the confines of the virtual environment for which he/she had privileges. This vulnerability has received a CVSS Base Score of 6.2.

Oracle has decided to issue this Security Alert based on a number of factors, including the potential impact of a successful exploitation of this vulnerability, the amount of detailed information publicly available about this flaw, and initial reports of exploit code already “in the wild.” Oracle further recommends that customers apply the relevant fixes as soon as they become available.

Oracle has also published a list of Oracle products that may be affected by this vulnerability. This list will be updated as fixes become available.

The Oracle Security and Development teams are also working with the Oracle Cloud teams to ensure that they can evaluate these fixes as they become available and apply the relevant patches in accordance with applicable change management processes in those organizations.

For More Information:

The Security Alert Advisory is located at

http://www.oracle.com/technetwork/topics/security/alert-cve-2015-3456-2542656.html

The list of Oracle products that may be affected by this vulnerability is published at http://www.oracle.com/technetwork/topics/security/venom-cve-2015-3456-2542653.html

OpenStackDatabase as a Service with Oracle Database 12c, Oracle Solaris and OpenStack

May 15, 2015 16:32 GMT

Just this morning Oracle announced a partnership with Mirantis to bring Oracle Database 12c to OpenStack. This collaboration enables Oracle Solaris and Mirantis OpenStack users to accelerate application and database provisioning in private cloud environments via Murano, the application catalog project in the OpenStack ecosystem. This effort brings Oracle Database 12c and Oracle Multitenant deployed on Oracle Solaris to Murano—the first Oracle cloud-ready products to be available in the catalog.

We've been hearing from lots of customers wanting to quickly deploy Oracle Database instances in their OpenStack environments and we're excited to be able to make this happen. Thanks to Oracle Database 12c and Oracle Multitenant, users can quickly create new Pluggable Databases to use in their cloud applications, backed by the secure and enterprise-scale foundations of Oracle Solaris and SPARC. What's more, with the upcoming generation of Oracle systems based on the new SPARC M7 processors, users will get automatic benefit of advanced security, performance and efficiency of Software in Silicon with features such as Application Data Integrity and the Database In-Memory Query Accelerator.

So if you're heading to Vancouver next week for the OpenStack Users and Developers Summit, stop by booths P9 and P7 to see a demo!

Update: (19/05/15) A technical preview of our work with Murano is now available here on the OpenStack Application Catalog.

May 13, 2015

Darryl GoveMisaligned loads in 64-bit apps

May 13, 2015 21:54 GMT

A while back I wrote up how to use dtrace to identify misaligned loads in 32-bit apps. Here's a script to do the same for 64-bit apps:

#!/usr/sbin/dtrace -s

pid$1::__do_misaligned_ldst_instr:entry
{
  @p[ustack()]=count();
} 

Run it as './script <pid>', where <pid> is the process ID of the application to probe.

Marcelo LealMobile first, Cloud first…

May 13, 2015 16:54 GMT
Cloud Computing (Source: Wikipedia - CC)


Hi there! Oh Gosh, it’s really, really cool to be able to write a few words on this space again! I’m very excited to continue the path I chose of helping companies of all sizes to embrace the cloud strategy and get all the benefits of utility IT. Right...
Read more »

May 12, 2015

Jeff SavitOracle Virtual Compute Appliance backup white paper

May 12, 2015 20:17 GMT

The white paper Oracle Virtual Compute Appliance Backup Guide has been published.

This document reviews the Virtual Compute Appliance architecture, describes the automated internal system backups of software components, and explains how to back up Oracle VM repositories, databases, and virtual machine contents to external storage, and how to recover those components.

May 05, 2015

Darryl GoveC++ rules enforced in Studio 12.4

May 05, 2015 19:04 GMT

Studio 12.4 has improved adherence to the C++ standard, so some code that was accepted by 12.3 might get reported as an error by the new compiler. The compiler documentation has a list of the improvements and examples of how to modify problem code to make it standard compliant.

May 03, 2015

Garrett D'AmoreMacOS X 10.10.3 Update is *TOXIC*

May 03, 2015 00:35 GMT
As a PSA (public service announcement), I'm reporting here that updating your Yosemite system to 10.10.3 is incredibly toxic if you use WiFi.

I've seen other reports of this, and I've experienced it myself.  What happened is that the update for 10.10.3 seems to have done something tragically bad to the WiFi drivers, such that it completely hammers the network to the point of making it unusable for everyone else on the network.

I have a late 2013 iMac 27", and after I updated, I found that other systems started badly badly misbehaving.  I blamed my ISP, and the router, because I was seeing ping times of tens of seconds!
(No, not milliseconds, seconds!!!  In one case I saw responses over 64 seconds.)  This was on other systems that were not upgraded.  Needless to say, that basically left the network unusable.

(The behavior was cyclical -- I'd get a few tens of seconds where pings to 8.8.8.8 would be in the 20 msec range, and then it would start to jump up very quickly until maxing out around a minute or so.  It would stay there for a minute or two, then reset or drop back to sane times.   But only very briefly.)

This was most severe when using a 5GHz network.  Switching down to 2.4GHz reduced some of the symptoms -- although still over 10 seconds to get traffic through and thoroughly unusable for a wide variety of applications.

There are reports that disabling Bluetooth may alleviate this, and also some people reported some success with clearing certain settings files.  I've not tried either of these yet.  Google around for the answer if you want to.  For now, my iMac 27" is powered off, until I can take the chance to disrupt the network again to try these "fixes".

Apple, I'm seriously seriously disappointed here.  I'm not sure at all how this testing got past you, but you need to fix this.  It's absolutely criminal that applying a recommended update with security critical fixes in it should turn my computer into a DoS device for my local network.  I'm shocked that several days later I've not seen a release update from Apple to fix this critical problem.

Anyway, my advice is, if possible, hold off for the update to 10.10.3.  It's tragically, horribly toxic not just to the upgraded device, but probably to the entire network it sits on.  I'm a little astounded that a bug in the code could hose an entire WiFi network as badly as this does -- I would have previously thought this impossible (and this was part of the reason why it took a while to diagnose this down to the computer -- I thought the ridiculous ping responses had to be a problem with my upstream provider!)

I'll post an update here if one becomes available.

April 29, 2015

Glynn FosterManaging Oracle Solaris systems with Puppet

April 29, 2015 01:21 GMT

This morning I gave a presentation to the IOUG (Independent Oracle Users Group) about how to manage Oracle Solaris systems using Puppet. Puppet was integrated with Oracle Solaris 11.2, with support for a number of new resource types thanks to Drew Fisher. The presentation covered the challenges in today's data center, some basic information about Puppet, and the work we've done to integrate it as part of the platform. Enjoy!

April 28, 2015

Darryl GoveSPARC processor documentation

April 28, 2015 16:39 GMT

The documentation for older SPARC processors has been put up on the web!

Robert MilkowskiZFS L2ARC - Recent Changes in Solaris 11

April 28, 2015 15:00 GMT
There is an excellent blog entry with more details on recent changes to ZFS L2ARC in Solaris 11.

Roch BourbonnaisIt is the Dawning of the Age of the L2ARC

April 28, 2015 13:12 GMT
One of the most exciting things that has gone into ZFS in recent history has been the overhaul of the L2ARC code. We fundamentally changed the L2ARC so that it now delivers the following: a reduced memory footprint, persistence across reboots, better eviction, compression on SSD, and scalable feeding.

Let's review these elements, one by one.

Reduced Footprint

We already saw in this ReARC article that we dropped the amount of core header information from 170 bytes to 80 bytes. This means we can track more than twice as much L2ARC data as before using a given memory footprint. In the past, the L2ARC had trouble building up in size due to its feeding algorithm, but we'll see below that the new code allows us to grow the L2ARC and use up available SSD space in its entirety. So much so that initial testing revealed a problem: for small memory configs with large SSDs, the L2ARC headers could actually end up filling most of the ARC cache, and that didn't deliver good performance. So, we had to put in place a memory guard for L2 headers, which is currently set to 30% of the ARC. As the ARC grows and shrinks, so does the maximum space dedicated to tracking the L2ARC. So, on a system with 1TB of ARC cache, up to 300GB could, if necessary, be devoted to tracking the L2ARC. With the 80-byte headers, this means we could track a whopping 30TB of data assuming an 8K blocksize. If you use a 32K blocksize, currently the largest blocks we allow in L2ARC, then that grows up to 120TB of SSD based auto-tiered L2ARC. Of course, if you have a small L2ARC the tracking footprint of the in-core metadata is smaller.
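For the curious, the arithmetic above can be checked with a small stand-alone calculation (illustrative only, using the figures quoted in this paragraph, not ZFS source code):

/* Illustrative arithmetic only: how much L2ARC a given header budget can
 * track, using the figures from the text. */
#include <stdio.h>

int main(void)
{
    double arc_bytes  = 1e12;              /* example: 1TB of primary ARC    */
    double hdr_budget = 0.30 * arc_bytes;  /* 30% cap for L2ARC headers      */
    double hdr_size   = 80.0;              /* bytes per in-core L2ARC header */
    double headers    = hdr_budget / hdr_size;

    /* roughly 30TB trackable with 8K blocks, 120TB with 32K blocks */
    printf("8K blocks:  %.0f TB trackable\n", headers * 8192.0 / 1e12);
    printf("32K blocks: %.0f TB trackable\n", headers * 32768.0 / 1e12);
    return 0;
}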

Persistent Across Reboot

With that much tracked L2ARC space, you would hate to see it washed away on a reboot as the previous code did. Not so anymore: the new L2ARC has an on-disk format that allows it to be reconstructed when a pool is imported. That new format tracks the device space in 8MB segments, for which each ZFS block (a DVA, for the ZFS geeks) consumes 40 bytes of on-SSD space. So reusing the example of an L2ARC made up of only 8K-sized blocks, each 8MB segment could store about 1000 of those blocks, consuming just 40K of on-SSD metadata. The key thing here is that to rebuild the in-core L2ARC space after a reboot, you only need to read back 40K, from the SSD itself, in order to discover and start tracking 8MB worth of data. We found that we could start tracking many TBs of L2ARC within minutes after a reboot. Moreover we made sure that as segment headers were read in, they would immediately be made available to the system and start to generate L2ARC hits, even before the L2ARC was done importing every segment. I should mention that this L2ARC import is done asynchronously with respect to the pool import and is designed to not slow down pool import or concurrent workloads. Finally, that initial L2ARC import mechanism was made scalable with many import threads per L2ARC device.

Better Eviction

One of the benefits of using an L2ARC segment architecture is that we can now weigh them individually and use the least valued segment as eviction candidate. The previous L2ARC would actually manage L2ARC space by using a ring buffer architecture: first-in first-out. It's not a terrible solution for an L2ARC but the new code allows us to work on a weight function to optimise eviction policy. The current algorithm puts segments that are hit, an L2ARC cache hit, at the top of the list such that a segment with no hits gets evicted sooner.

Compressed on SSD

Another great new feature delivered is the addition of compressed L2ARC data. The new L2ARC stores data in SSDs the same way it is stored on disk. Compressed datasets are captured in the L2ARC in compressed format which provides additional virtual capacity. We often see a 2:1 compression ratio for databases and that is becoming more and more the standard way to deploy our servers. Compressed data now uses less SSD real estate in the L2ARC: a 1TB device holds 2TB of data if the data compresses 2:1. This benefit helps absorb the extra cost of flash based storage. For the security minded readers, be reassured that the data stored in the persistent L2ARC is stored using the encrypted format.

Scalable Feeding

There is a lot to like about what I just described but what gets me the most excited is the new feeding algorithm. The old one was suboptimal in many ways. It didn't feed well, disrupted the primary ARC, had self-imposed obsolete limits and didn't scale with the number of L2ARC devices. All gone.

Before I dig in, it should be noted that a common misconception about L2ARC feeding is assuming that the process handles data as it gets evicted from L1. In fact the two processes, feeding and evicting, are separate operations, and it is sometimes necessary under memory pressure to evict a block before being able to install it in the L2ARC. The new code is much much better at avoiding such events; it does so by keeping its feed point well ahead of the ARC tail. Under many conditions, when data is evicted from primary ARC it is after the L2ARC has processed it.

The old code also had some self-imposed throughput limit that meant that N x L2ARC devices in one pool, would not be fed at proper throughput. Given the strength of the new feeding algorithm we were able to remove such limits and now feeding scales with number of L2ARC devices in use. We also removed an obsolete constraint in which read I/Os would not be sent to devices as they were fed.

With these in place, if you have enough L2ARC bandwidth in the devices, then there are few constraints in the feeder to prevent actually capturing 100% of eligible L2ARC data [1]. And capturing 100% of data is the key to actually delivering a high L2ARC hit rate in the future. By hitting in L2, of course you delight end users waiting for such reads. More importantly, an L2ARC hit is a disk read I/O that doesn't have to be done. Moreover, that saved HDD read is a random read, one that would have led to a disk seek, the real weakness of HDDs. Therefore, we reduce utilization of the HDDs, which is of paramount importance when some unusual job mix arrives and causes those HDDs to become the resource gating performance, a.k.a. crunch time. With a large L2ARC hit count, you get out of this crunch time quicker and restore a proper level of service to your users.

Eligibility

The L2ARC eligibility rules were impacted by the compression feature. The max blocksize considered for eligibility was unchanged at 32K, but the check is now done on compressed size if compression is enabled. As before, the idea behind an upper limit on eligible size is two-fold: first, for larger blocks, the latency advantage of flash over spinning media is reduced. The second aspect of this is that the SSD will eventually fill up with data. At that point, any block we insert in the L2ARC requires an equivalent amount of eviction. A single large block can thus cause eviction of a large number of small blocks. Without an upper cap on block size, we can face a situation of inserting a large block for a small gain with a large potential downside if many small evicted blocks become the subject of future hits. To paraphrase Yogi Berra: "Caching decisions are hard." [2]

The second important eligibility criteria is that blocks must not have been read through prefetching. The idea is fairly simple. Prefetching applies to sequential workloads and for such workloads, flash storage offers little advantage over HDDs. This means that data that comes in through ZFS level prefetching is not eligible for L2ARC.
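A minimal sketch of these two eligibility rules, using illustrative names rather than the actual ZFS code, looks like this:

/* Minimal sketch of the two eligibility rules described above; the names and
 * structure are illustrative, not the actual ZFS implementation. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define L2ARC_MAX_ELIGIBLE_SIZE (32 * 1024)

typedef struct {
    uint64_t size_on_disk;  /* compressed size when dataset compression is on */
    bool     prefetched;    /* block was brought in by ZFS-level prefetch     */
} buf_info_t;

static bool l2arc_eligible(const buf_info_t *b)
{
    if (b->size_on_disk > L2ARC_MAX_ELIGIBLE_SIZE)
        return false;   /* large block: small latency win, big eviction cost */
    if (b->prefetched)
        return false;   /* sequential/prefetched data gains little from flash */
    return true;
}

int main(void)
{
    buf_info_t small_random = { 8 * 1024, false };
    buf_info_t big_seq      = { 128 * 1024, true };

    printf("8K random block eligible:   %d\n", l2arc_eligible(&small_random));
    printf("128K prefetched eligible:   %d\n", l2arc_eligible(&big_seq));
    return 0;
}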

These criteria leave two pitfalls to avoid during an L2ARC demo: first, configuring all datasets with a 128K recordsize, and second, trying to prime the L2ARC using dd-like sequential workloads. Both of those are, by design, workloads that bypass the L2ARC. The L2ARC is designed to help you with disk-crunching real workloads, which are those that access small blocks of data in random order.

Conclusion : A Better HSP

In this context, the Hybrid Storage Pool (HSP) model refers to our ZFSSA architecture where data is managed in 3 tiers:

  1. a high capacity TB scale super fast RAM cache;
  2. a PB scale pool of hard disks with RAID protection;
  3. a channel of SSD base cache devices that automatically capture an interesting subset of the data.
And since the data is captured in the L2ARC device only after it has been stored in the main storage pool, those L2ARC SSDs do not need to be managed by RAID protection. A single copy of the data is kept in the L2ARC knowing that if any L2ARC device disappears, data is guaranteed to be present in the main pool. Compared to a mirrored all-flash storage solution, this ZFSSA auto-tiering HSP means that you get 2X the bang for your SSD dollar by avoiding mirroring of SSDs and with ZFS compression that becomes easily 4X or more. This great performance comes along with the simplicity of storing all of your data, hot, warm or cold, into this incredibly versatile high performance and cost effective ZFS based storage pool.


[1] It should be noted that ZFSSA tracks L2ARC eviction as "Cache: ARC evicted bytes per second broken down by L2ARC state", with subcategories of "cached," "uncached ineligible," and "uncached eligible." Having this last one at 0 implies a perfect L2ARC capture.

[2] For non-Americans, this famous baseball coach is quoted as saying, "It's tough to make predictions, especially about the future."

April 17, 2015

Marcel Hofstetter#VDCF Release 5.5

April 17, 2015 08:07 GMT
Today we released version 5.5 of VDCF. VDCF is a command-line based management tool
for Oracle Solaris. It provides a higher-level API for system administrators and
makes deployment, management, migration and monitoring of Solaris, LDoms and Zones
much easier and faster. You don't need deep knowledge of Solaris to use it.

By the way, VDCF is Swiss-quality software and a very stable product. Version 1.0 was released in 2006!
We will continue to add more and more features ...


There is a free trial version available.
You can find all docs and downloads at: http://www.jomasoft.ch/vdcf/

April 16, 2015

OpenStackOpenStack Swift on Oracle Solaris

April 16, 2015 23:52 GMT

Jim Kremer has written a blog about the OpenStack object storage service Swift and how to set it up on Oracle Solaris. For Swift on Solaris we use the ZFS file system as the underlying storage, which means we can take advantage of things like snapshots and clones, data encryption and compression, and the underlying redundancy that the ZFS architecture provides with storage pools and mirroring.

Read Jim's blog on How to get Swift up and running on Solaris.

-- Glynn Foster

April 15, 2015

Darren MoffatOpenSSH sftp(1) 'ls -l' vs 'ls -lh' and uid/gid translation

April 15, 2015 13:00 GMT
I recently had someone question why, when using OpenSSH's sftp(1), 'ls -l' shows username/groupname in the output, but 'ls -lh' translates the file sizes into SI units while the output now shows the numeric uid/gid. It took myself and another engineer a while to work through this, so I thought I would blog the explanation of what is going on.

The protocol used by sftp isn't actually an IETF standard.  OpenSSH (and SunSSH) uses this document:

https://filezilla-project.org/specs/draft-ietf-secsh-filexfer-02.txt [This is actually protocol version 3]

In that version of the draft there was a 'longname' field in the SSH_FXP_NAME response.  The standard explicitly says:

    The SSH_FXP_NAME response has the following format:

       uint32     id
       uint32     count
       repeats count times:
           string     filename
           string     longname
           ATTRS      attrs

...

   The format of the `longname' field is unspecified by this protocol.
   It MUST be suitable for use in the output of a directory listing
   command (in fact, the recommended operation for a directory listing
   command is to simply display this data).  However, clients SHOULD NOT
   attempt to parse the longname field for file attributes; they SHOULD
   use the attrs field instead.

When you do 'ls -l' the sftp client is displaying the longname, so it is the server that created that text. The longname is generated on the server and looks like the output of 'ls -l'; the uid/gid to username/groupname translation was done on the server side.

When you add in '-h' the sftp client is obeying the draft standard and not parsing the longname field, because it has to pretty-print the size into SI units. So it must just display the numeric uid/gid.

The OpenSSH code explicitly does not attempt to translate the uid/gid because it has no way of knowing whether the nameservice domain on the remote and local sides is the same. This is why when you do 'lls -lh' you do get SI units and translated names, but when you do 'ls -lh' you get untranslated names.
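To make that client-side choice concrete, here is a small self-contained sketch (illustrative data structures, not the OpenSSH source):

/* Minimal sketch, not OpenSSH source: models why the sftp client prints the
 * server's longname for 'ls -l' but falls back to raw uid/gid for 'ls -lh'. */
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

/* Subset of an SSH_FXP_NAME entry as described in the -02 draft. */
struct fxp_name {
    const char *filename;
    const char *longname;  /* server-rendered "ls -l" style line        */
    uint32_t    uid, gid;  /* numeric ids from ATTRS; never translated  */
    uint64_t    size;
};

static void print_entry(const struct fxp_name *n, bool humanize)
{
    if (!humanize) {
        /* 'ls -l': display the opaque longname, as the draft recommends. */
        puts(n->longname);
    } else {
        /* 'ls -lh': the client formats the size itself, so it rebuilds the
         * line from ATTRS, which only carry numeric uid/gid.             */
        printf("%u %u %6.1fK %s\n", (unsigned)n->uid, (unsigned)n->gid,
            n->size / 1024.0, n->filename);
    }
}

int main(void)
{
    struct fxp_name n = {
        "report.txt",
        "-rw-r--r--   1 alice    staff      123456 Apr 15 13:00 report.txt",
        1001, 10, 123456
    };
    print_entry(&n, false);   /* ls -l  */
    print_entry(&n, true);    /* ls -lh */
    return 0;
}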

In the very next version of the draft:

https://filezilla-project.org/specs/draft-ietf-secsh-filexfer-03.txt [ Protocol version 4]

The format of SSH_FXP_NAME and importantly ATTRS changes very significantly.  The longname is dropped and the ATTRS no longer has a UNIX uid/gid but an NFSv4 owner/group.

OpenSSH never implemented anything past the -02 draft.  Attempts at standardising the SFTP protocol eventually stopped at the -13 draft (which was protocol version 6) in 2006.

The proftpd server also has the ability to do the sftp protocol and it implements the -13 draft.  However Solaris 11 doesn't currently ship with the mod_sftp module, and even if it did the /usr/bin/sftp client doesn't talk that protocol version so an alternate client would be needed too.

April 14, 2015

Security BlogApril 2015 Critical Patch Update Released

April 14, 2015 20:17 GMT

Hello, this is Eric Maurice.

Oracle today released the April 2015 Critical Patch Update. The predictable nature of the Critical Patch Update program is intended to provide customers the ability to plan for the application of security fixes across all Oracle products. Critical Patch Updates are released quarterly in the months of January, April, July, and October. Unfortunately, Oracle continues to periodically receive reports of active exploitation of vulnerabilities that have already been fixed by Oracle in previous Critical Patch Update releases. In some instances, malicious attacks have been successful because customers failed to apply Critical Patch Updates. The “Critical” in the designation of the Critical Patch Update program is intended to highlight the importance of the fixes distributed through the program. Oracle highly recommends that customers apply these Critical Patch Updates as soon as possible. Note that Critical Patch Updates are cumulative for most Oracle products. As a result, the application of the most recent Critical Patch Update brings customers to the most recent security release, and addresses all previously-addressed security flaws for these products. The Critical Patch Update release schedule for the next 12 calendar months is published on Oracle’s Critical Patch Updates, Security Alerts and Third Party Bulletin page on Oracle.com.

The April 2015 Critical Patch Update provides 98 new fixes for security issues across a wide range of product families including: Oracle Database, Oracle Fusion Middleware, Oracle Hyperion, Oracle Enterprise Manager, Oracle E-Business Suite, Oracle Supply Chain Suite, Oracle PeopleSoft Enterprise, Oracle JDEdwards EnterpriseOne, Oracle Siebel CRM, Oracle Industry Applications, Oracle Java SE, Oracle Sun Systems Products Suite, Oracle MySQL, and Oracle Support Tools.

Out of these 98 new fixes, 4 are for the Oracle Database. None of the database vulnerabilities are remotely exploitable without authentication. The most severe of the database vulnerabilities (CVE-2015-0457) has received a CVSS Base Score 9.0 only for Windows for Database versions prior to 12c. This Base Score is 6.5 for Database 12c on Windows and for all versions of Database on Linux, Unix and other platforms. This vulnerability is related to the presence of the Java Virtual Machine in the database.

17 of the vulnerabilities fixed in this Critical Patch Update are for Oracle Fusion Middleware. 12 of these Fusion Middleware vulnerabilities are remotely exploitable without authentication, and the highest reported CVSS Base Score is 10.0. This CVSS 10.0 Base Score is for CVE-2015-0235 (a.k.a. GHOST, which affects the GNU libc library) affecting the Oracle Exalogic Infrastructure.

This Critical Patch Update also delivers 14 new security fixes for Oracle Java SE. 11 of these Java SE fixes are for client-only (i.e., these vulnerabilities can be exploited only through sandboxed Java Web Start applications and sandboxed Java applets). Two apply to JSSE client and Server deployments and 1 to Java client and Server deployments. The Highest CVSS Base Score reported for these vulnerabilities is 10.0 and this score applies to 3 of the Java vulnerabilities (CVE-2015-0469, CVE-2015-0459, and CVE-2015-0491).

For Oracle Applications, this Critical Patch Update provides 4 new fixes for Oracle E-Business Suite, 7 for Oracle Supply Chain Suite, 6 for Oracle PeopleSoft Enterprise, 1 for Oracle JDEdwards EnterpriseOne, 1 for Oracle Siebel CRM, 2 for the Oracle Commerce Platform, 2 for Oracle Retail Industry Suite, and 1 for Oracle Health Sciences Applications.

Finally, this Critical Patch Update provides 26 new fixes for Oracle MySQL. 4 of the MySQL vulnerabilities are remotely exploitable without authentication and the maximum CVSS Base Score for the MySQL vulnerabilities is 10.0.

As stated at the beginning of this blog, Oracle recommends that customers consistently apply Critical Patch Update as soon as possible. The security fixes provided through the Critical Patch Update program are thoroughly tested to ensure that they do not introduce regressions across the Oracle stack. Extensive documentation is available on the My Oracle Support Site and customers are encouraged to contact Oracle Support if they have questions about how to best deploy the fixes provided through the Critical Patch Update program.

For More Information:

The April 2015 Critical Patch Update advisory is located at http://www.oracle.com/technetwork/topics/security/cpuapr2015-2365600.html

The Critical Patch Updates, Security Alerts and Third Party Bulletin page is located at http://www.oracle.com/technetwork/topics/security/alerts-086861.html

The Oracle Software Security Assurance web site is located at http://www.oracle.com/us/support/assurance/overview/index.html. Oracle’s vulnerability handling policies and practices are described at http://www.oracle.com/us/support/assurance/vulnerability-remediation/introduction/index.html

Garrett D'Amorevtprint - blast from the past

April 14, 2015 18:13 GMT
I recently decided to have a look back at some old stuff I wrote.  Blast from the past stuff.  One of the things I decided to look at was the very first open source program I wrote -- something called vtprint.

vtprint is a tool that borrowed ideas stolen from the pine mail program.  This comes from the days of serial dialups, and legacy (Kermit or ProComm for those who remember such things) serial connectivity over 14.4K modems.  Printing a file on a remote server was hard back then; you could transfer the file using xmodem using rx(1), or its "newer" variants, rb(1) or rz(1), but this was awkward.  It turns out that most physical terminals had support for an escape sequence that would start hardcopy to a locally attached printer, and another that would stop it.  And indeed, many terminal emulators have the same support.  (I added support for this myself to the rxvt(1) terminal emulator back in the mid 90's, so any emulators derived from rxvt inherit this support as well.)

So vtprint, which in retrospect could have been written in a few lines of shell code, is a C program.  It supports configuration via a custom file called "vtprintcap", that can provide information about different $TERM values and the escape sequences they use.  So for example, TVI925 terminals use a different escape sequence than VT100 or VT220 terminals.
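In retrospect, the core of the idea is only a few lines; here is a minimal sketch of it (not the original vtprint source), assuming a VT100/ANSI-family terminal whose pass-through printing is driven by the standard media-copy escape sequences:

/* Minimal sketch of vtprint's core idea (not the original source): wrap a
 * file in the ANSI media-copy sequences so a terminal that supports
 * pass-through printing copies it to the locally attached printer.
 * CSI 5 i starts pass-through print, CSI 4 i stops it (VT100/VT220 family;
 * other terminals, e.g. TVI925, use different codes -- hence vtprintcap). */
#include <stdio.h>

int main(int argc, char **argv)
{
    FILE *f = (argc > 1) ? fopen(argv[1], "r") : stdin;
    int c;

    if (f == NULL) {
        perror("fopen");
        return 1;
    }
    fputs("\033[5i", stdout);   /* printer controller ON  */
    while ((c = getc(f)) != EOF)
        putchar(c);
    fputs("\033[4i", stdout);   /* printer controller OFF */
    putchar('\f');              /* trailing form feed, like the "formfeed" flag */
    return 0;
}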

The first version of it, 1.0, is lost in time, and was written back in 1991 or 1992 while I was working as a "Computer Consultant" (fancy name for BOFH, though I also did a lot of programming for the school's Windows 3.1 network and Novell NetWare 3.12 server) at the Haas School of Business at UC Berkeley.  (I was a Chemical Engineering student there, and I flunked out in thermodynamics -- I still blame Gibbs Free Energy ΔG as a concept I never truly could get my mind around.  It took a couple of years before I finally gave up fighting "hard engineering" and accepted my true calling as a software engineer.)

Anyway, I transferred to SDSU into the Computer Science department.  While there, back in 1993, I updated vtprint significantly, and that is the release that lives on at SourceForge.

Today I imported that project to github.  (And let me tell you, it took jumping through some hoops to do this.  Apparently all the old CVS projects that were going to be converted are supposed to already have done so, because at this point even the conversion tools are mostly crufty and broken.  Maybe I'll post what I did there.)  Most of the files indicate an age of 11 years ago.  That's because 11 years ago I imported the source into CVS for the benefit of SourceForge.  The actual files date back to 1994, and you can even see my old email address -- which hasn't worked for a couple of decades -- in the files.

So, I gave vtprint a whirl today for the first time in years:

garrett@hipster{4}> ./vtprint README
*** vtprint (v2.0.2) ***
Copyright 1993-1994, Garrett D'Amore
Last revised October 25, 1994.
NO WARRANTY!  Use "vtprint -w" for info.
Freely redistributable.  Use "vtprint -l" for info.
vtprint: Can't open /usr/local/lib/vtprint/vtprintcap, using builtin codes.
vtprint: Using <stdout> for output.
vtprint: Output flags: formfeed
vtprint: Printed README.
vtprint: Successfully printed 1 file (1 specified).

Sadly, MacOS X Terminal.app does not appear to emulate the escape sequences properly.  But iTerm2 does so very nicely.  It even generates a nice print preview using the standard MacOS X print dialog.  Sadly, it does not pass through PostScript unmolested, but for printing ASCII files it works beautifully.

I think that I'm going to start using vtprint again, when I need to print a file located on a virtual machine, or over a connection that only offers SSH (such as a secured access machine).  With firewalled networks, I suspect that this program has new usefulness.  I'm also switching from Terminal.app (as many people have suggested) to iTerm2.

April 10, 2015

Jeff SavitOracle VM Server for SPARC 3.2 - Enhanced Virtual Disk Multipathing

April 10, 2015 16:11 GMT
Last month, Oracle released Oracle VM Server for SPARC release 3.2 which includes numerous enhancements. One of these is improvement for virtual disk multipathing, which provides redundant paths to virtual disk so that disk access continues even if a path or service domain fails.

Multipath groups are arranged in an active/standby pair of connections to the same physical media. In case of a path or service domain failure, I/O activity continues on a surviving path. This is also helpful for rolling upgrades: a service domain can be rebooted for an upgrade, and virtual disk I/O continues without interruption. That's important for continuous availability while upgrading system software.

A previous limitation was that you could not determine from commands which path was active, and you couldn't force activity onto a selected path. That meant that all the I/O for multiple virtual disks went (typically) to the primary path instead of being load balanced across service domains and HBAs. You could deduce which service domains were actively doing disk I/O by using commands like iostat, but there was no direct visibility, and no way to spread the load. Oracle VM Server for SPARC 3.2 addresses this by adding command output that shows which path is active and by letting you switch the active path to one of the available paths. Now, the command 'ldm list-bindings' shows which path is active, and the command 'ldm set-vdisk' lets you set which path is active. For further details and syntax, please see the documentation at Configuring Virtual Disk Multipathing

April 09, 2015

OpenStackOracle at OpenStack Summit in Vancouver - May 18-22

April 09, 2015 21:42 GMT

Oracle is a premier sponsor at the OpenStack Summit in Vancouver, May 18-22. This year we will have experts from all of Oracle's OpenStack technologies, including Oracle Linux and Oracle VM, Oracle Solaris, Oracle ZFS Storage Appliance, and Oracle Tape Storage Solutions. We will have informative sessions and a booth to visit. Here's one of the Oracle sessions:

Title: Making OpenStack secure and compliant for the enterprise

Many Enterprises deploying OpenStack also need to meet Security and Compliance requirements. In this talk, you will learn how Oracle can help you address these requirements with OpenStack Cloud Infrastructure solutions designed to meet the needs of the Enterprise. Come learn how Oracle can help you deploy OpenStack solutions that you can trust to meet the needs of your enterprise, your customers, and the demands of mission-critical cloud services.

Tuesday, May 19 from 2:50 p.m. to 3:30 p.m., Room 116 / 117

We encourage you to visit the Oracle Booth # P9 for discussion with our OpenStack experts on your requirements and how best to address your issues for smooth deployment. Marketplace hours and demos will be done on: 

Hope to meet you at OpenStack Summit!  

April 06, 2015

The Wonders of ZFS StorageIs that purpose-built backup appliance really the best for your database backup and restore?

April 06, 2015 19:18 GMT

Data protection is a critical component of the job if you have anything to do with IT operations. Regardless of your job title, database administrators and storage administrators alike know how critical it is to plan for any type of systems outage. Backup and restore are key components of data protection and of preparing for a disaster recovery event. The storage architecture is a key element in the overall enterprise IT environment, and when focusing on the implementation of disk storage for backup and recovery, there isn’t just one platform to do the task. And if you are looking at database backups, only Oracle can provide a better database backup platform than storage behemoth EMC.

You can hear all about the benefits of the Oracle ZFS Storage Appliance from an Oracle sales rep, but having an outside, independent analyst break down the appliance is even better. In a recent Competitive Advantage report, Jerome Wendt, lead analyst and president at DCIG, compared the Oracle ZFS Storage ZS4-4 to the EMC Data Domain 990. His conclusion: “By using Oracle’s ZS4-4, enterprises for the first time have a highly available, highly scalable solution that is specifically co-engineered with Oracle Database to accelerate backups while minimizing storage costs and capacity.”

The report cites a number of advantages that ZS4-4 has over EMC DD990:

Higher backup and restore throughput:

ZS4-4 can achieve native backup throughput speeds of up to 30TB/hour and restore to 40TB/hour. EMC DD990 only achieves a maximum 15TB/hour backup rate with no published restore rates available.

Ample capacity to store enterprise Oracle Database backups:

ZS4-4 scales up to 3.5PB. DD990 only scales to a maximum of 570TB.

Higher data reduction rates than deduplication can achieve:

The ZS4-4 supports Hybrid Columnar Compression (HCC) so it can achieve compression rates of up to 50x and overall storage capacity reduction of up to 40%. HCC is not available with non-Oracle storage. DD990 cannot deliver these high Oracle Database data reduction rates.

Accelerates backup and recovery throughput:

Oracle Intelligent Storage Protocol reduces manual tuning and expedites backups with ZS4-4; it is not available with the EMC DD990.

Higher availability:

ZS4-4 is available in an optional dual-controller configuration for high availability. The EMC DD990 is only available in a single-controller configuration.

More flexible:

ZS4-4 is a multi-function system, as opposed to the DD990, a single-purpose deduplicating backup appliance.

In conclusion:

“The Oracle ZFS Storage ZS4-4 provides enterprises the flexibility they need to quickly and efficiently protect their Oracle databases. Shipping in a highly available configuration, scaling up to 3.5PBs and providing multiple storage networking protocols with high throughput rates, it combines these features with Oracle Database’s native HCC and OISP features to provide enterprises with a solution that is not dependent upon a “one size fits all,” single-function appliance such as the EMC DD990.”

To find out more about how DCIG arrived at their conclusions, take a look for yourself: www.oracle.com/us/corporate/analystreports/dcig-zs44-odb-vs-emc-dd990-2504515.pdf.

If you have any responsibility for the backup and restore of an Oracle database, you owe it to yourself to learn more about how the Oracle ZS Storage Appliance delivers better performance to ensure your critical databases can be restored when everything is on the line.

Worried about managing your own storage? Oracle has made managing the ZFS Storage Appliance easy for the DBA by integrating appliance management into Oracle Enterprise Manager. Learn more about storage management in this short video: https://www.youtube.com/watch?v=JzmGKKXsuSw.

The Wonders of ZFS StorageLights, Camera, and… Action with Oracle ZFS Storage Appliance!

April 06, 2015 16:00 GMT

Inspired by the recent Oscars award ceremony, I decided to try my hand at directing, and here is my work-in-progress script, featuring our star: the Oracle ZFS Storage Appliance.

Location: Editing Studio

Scene 1:

The on-location camera captured some amazing shots! We have rows and rows of creative minds sifting through raw media files and trying to make sense of the content. Where do these media files live? On the Oracle ZFS Storage Appliance!

While they finalize the shots, in another part of the studio, VFX artists create stunning visual effects by connecting to the render farm with thousands of servers, which are again supported by our storage star!

A ZS4-4 system delivers low-latency read performance, easily handling millions of DPX files while being accessed by thousands of render farm servers and seamlessly delivering content to interactive digital artists. The ZS4-4 posted an SPC-2 result of 31,486.23 SPC-2 MBPS™ throughput, with streaming media performance at 36,175.91 SPC-2 MBPS™.

Scene 2:

Meanwhile, in a futuristic-looking IT center, the studio system administrators are engaged in deep discussion about how to configure the ZFS shares to deliver optimal performance. They have just received word that the ongoing project has a variety of workload characteristics: the movie was filmed in 4K, but the VFX will be done in 2K and later scaled up to conform to the original format. Not to worry, they can continue planning their summer vacation, because Oracle ZFS Storage is there to save the day.

ZFS storage supports adjustable record size (8KB to 1MB), enabling VFX designers to choose large block sizes for efficiently handling 2K, 4K and high frame rate (3D) digital media workloads.
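As a rough illustration of what that looks like, here is how record size could be set on plain Solaris ZFS datasets (hypothetical pool and share names; on the ZFS Storage Appliance itself the equivalent record size property is set per share in the BUI or CLI):

# Large records for 2K/4K DPX image sequences
zfs create -o recordsize=1M mediapool/dpx_frames

# Smaller records for a share holding project metadata
zfs create -o recordsize=128k mediapool/project_meta

# Verify the settings
zfs get recordsize mediapool/dpx_frames mediapool/project_meta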

Scene 3:

A studio system administrator is about to break for lunch when he receives a call from the VFX division. A VFX artist has suddenly observed a drop in performance, affecting the animation sequence she was working on. The studio system admin opens up the ZFS Storage GUI and goes to the Analytics section. Within minutes the hidden problem is discovered and the necessary action is taken to address the performance issue. Another employee-of-the-month award to add to his office wall!

Advanced Storage Analytics in ZFS Storage provide deep insights into workload characteristics, offering tremendous manageability advantages and 73% faster troubleshooting vs. NetApp (Strategic Focus Group Usability Comparison Study).

Scene 4:

The IT bigwigs of the studio decide in a fiscal planning meeting: we need a storage solution that delivers the highest efficiency and performance for the next big movie project. Who do we call?

Oracle ZFS Storage delivers high performance without a high price tag. Oracle’s high-end model, Oracle ZFS Storage ZS4-4, posted an SPC-2 result of 31,486.23 SPC-2 MBPS™ throughput at $17.09 SPC-2 Price-Performance™, delivering “million dollar performance at half the price,” while Oracle’s midrange model, Oracle ZFS Storage ZS3-2, achieved 16,212.66 SPC-2 MBPS™ with an overall #1 ranking in price-performance at $12.08 SPC-2 Price-Performance™ (as of March 17, 2015).

Scene 5:

It is almost close of business at the editing studio. The VFX artists want to back up their files and protect the creative assets they have been working on. Oracle ZFS Storage Appliance to the rescue again!

Low-latency write performance enables fast archiving, while strong AES 256-bit data-at-rest encryption combined with a two-tier key architecture offers highly secure, granular, and easy-to-implement storage-level encryption for rich media assets.
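As a loose analogy on plain Solaris 11 ZFS (hypothetical names; the appliance manages its AES-256 data-at-rest encryption and two-tier keys per share or project through the BUI/CLI rather than with these commands), encrypting a dataset for finished assets could look like this:

# Create an encrypted dataset, prompting for a wrapping passphrase
zfs create -o encryption=aes-256-ccm -o keysource=passphrase,prompt mediapool/renders_secure

# Confirm the encryption properties
zfs get encryption,keysource mediapool/renders_secure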

THE END

Watch out for the Oracle booth at the NAB Show this year. If you want an up-close and personal look at our star, the Oracle ZFS Storage Appliance, and want to know more about its super powers, visit Oracle Stand SU2202 (South Upper Hall), Las Vegas Convention Center, April 13-16, 2015.

March 31, 2015

Darryl GoveUsing the Solaris Studio IDE for remote development

March 31, 2015 19:35 GMT

Solaris Studio has an IDE based on NetBeans.org. One of the features of the IDE is its ability to do remote development, i.e., you do your development work on a Windows laptop while the builds run on a remote Solaris or Linux box. Vladimir has written up a nice how-to guide covering the three models that Studio supports:

As shown in the image, the three models are:

Marcel HofstetterLDom Networking using SR-IOV (VF) or vnet?

March 31, 2015 15:30 GMT
The SPARC/LDoms technology offers two options when setting up Ethernet network configurations.
One option is virtual networking using vnet, which supports live migration; as of the current LDoms version 3.2 its performance overhead is minimal. This is why I would recommend using vnet, especially when your LDoms sit on the same system/vsw and communication stays local: the vnet traffic is then very fast and doesn't use the physical network card at all. Using uperf between two small LDoms with one core assigned each, throughput reaches more than 7 Gb/s on a SPARC T4-1 server.
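A minimal vnet setup looks roughly like this (hypothetical names: physical NIC net0, guest domain ldg1; see the ldm man page for all options):

# Create a virtual switch in the control domain backed by a physical NIC
ldm add-vsw net-dev=net0 primary-vsw0 primary

# Give the guest a virtual network device on that switch
ldm add-vnet vnet0 primary-vsw0 ldg1

# Guests on the same vsw exchange traffic in memory, so a uperf run between
# two such guests should leave the physical NIC essentially idle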

SR-IOV (Ethernet virtual functions) is available with newer Ethernet cards; older cards don't support this technology. LDoms version 3.2, released this month, removes the old drawback of LDom panics when the control domain reboots. SR-IOV/VF offers a small performance benefit over vnet, at the cost of losing LDom (live) migration. For applications that require the lowest latency, SR-IOV virtual functions are the right choice.
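For comparison, assigning an SR-IOV virtual function could look like the sketch below (hypothetical device paths; requires an SR-IOV-capable card, and the exact steps depend on your platform and LDoms version):

# List I/O devices to find the physical function (PF) of the NIC
ldm list-io

# Create a virtual function on that PF, then assign it to the guest
ldm create-vf /SYS/MB/NET0/IOVNET.PF0
ldm add-io /SYS/MB/NET0/IOVNET.PF0.VF0 ldg1

# Note: a domain that owns a VF cannot be live-migrated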

What's New in LDoms 3.2

March 27, 2015

Robert MilkowskiTriton: Better Docker

March 27, 2015 20:07 GMT
Bryan on Triton.

March 26, 2015

Joerg MoellenkampOracle Business Breakfast "Netzwerktechnik" on March 27 will not take place.

March 26, 2015 19:48 GMT
We already sent this message to the registered guests the day before yesterday, but here it is again for everyone who might simply have wanted to drop by tomorrow: the vacation and sickness situation is apparently so strained at the moment (we completely underestimated how strongly this would already be felt in the week before Good Friday week) that many of the "frequent breakfasters" and a number of first-time visitors were unable to accept despite their interest. We have therefore decided to cancel tomorrow's breakfast because of the very low number of registrations.

March 25, 2015

Darryl GoveCommunity redesign...

March 25, 2015 16:16 GMT

Rick has a nice post about the changes to the Oracle community pages. Useful quick read.