April 17, 2015

Marcel Hofstetter: VDCF Release 5.5

April 17, 2015 08:07 GMT
Today we released Version 5.5 of VDCF. VDCF is a command-line based management tool
for Oracle Solaris. It provides a higher-level API for system administrators and
makes deployment, management, migration and monitoring of Solaris, LDoms and Zones
much easier and faster. You don't need deep knowledge of Solaris to use it.

By the way, VDCF is Swiss-quality software and a very stable product. Version 1.0 was released in 2006!
We will continue to add more and more features ...


There is a free trial version available.
You can find all docs and downloads at: http://www.jomasoft.ch/vdcf/

April 16, 2015

OpenStack: OpenStack Swift on Oracle Solaris

April 16, 2015 23:52 GMT

Jim Kremer has written a blog about the OpenStack object storage service Swift and how to set it up on Oracle Solaris. For Swift on Solaris we use the ZFS file system as the underlying storage, which means we can take advantage of things like snapshots and clones, data encryption and compression, and the underlying redundancy that the ZFS architecture provides with storage pools and mirroring.

Read Jim's blog on How to get Swift up and running on Solaris.

-- Glynn Foster

April 15, 2015

Darren Moffat: OpenSSH sftp(1) 'ls -l' vs 'ls -lh' and uid/gid translation

April 15, 2015 13:00 GMT
Someone recently asked why, with OpenSSH's sftp(1), 'ls -l' shows username/groupname in the output, but 'ls -lh' translates the file sizes into SI units while showing the numeric uid/gid instead. It took me and another engineer a while to work through this, so I thought I would blog the explanation of what is going on.

The protocol used by sftp isn't actually an IETF standard.  OpenSSH (and SunSSH) uses this document:

https://filezilla-project.org/specs/draft-ietf-secsh-filexfer-02.txt [This is actually protocol version 3]

In that version of the draft there was a 'longname' field in the SSH_FXP_NAME response.  The standard explicitly says:

    The SSH_FXP_NAME response has the following format:

       uint32     id
       uint32     count
       repeats count times:
           string     filename
           string     longname
           ATTRS      attrs

...

   The format of the `longname' field is unspecified by this protocol.
   It MUST be suitable for use in the output of a directory listing
   command (in fact, the recommended operation for a directory listing
   command is to simply display this data).  However, clients SHOULD NOT
   attempt to parse the longname field for file attributes; they SHOULD
   use the attrs field instead.

When you do 'ls -l', the sftp client displays the longname, which the server created. The longname is generated on the server and looks like the output of 'ls -l'; the uid/gid to username/groupname translation is done on the server side.

When you add in '-h', the sftp client has to pretty-print the size into SI units, so it obeys the draft standard and does not parse the longname field; it builds the listing from the attrs field instead. Since attrs only carries the numeric uid/gid, that is what it displays.

The OpenSSH code explicitly does not attempt to translate the uid/gid because it has no way of knowing whether the nameservice domain on the remote and local sides is the same. This is why 'lls -lh' (the local listing) gives you SI units and translated names, but 'ls -lh' on the remote side gives you untranslated numeric ids.

In the very next version of the draft:

https://filezilla-project.org/specs/draft-ietf-secsh-filexfer-03.txt [ Protocol version 4]

The format of SSH_FXP_NAME and importantly ATTRS changes very significantly.  The longname is dropped and the ATTRS no longer has a UNIX uid/gid but an NFSv4 owner/group.

OpenSSH never implemented anything past the -02 draft.  Attempts at standardising the SFTP protocol eventually stopped at the -13 draft (which was protocol version 6) in 2006.

The proftpd server also has the ability to do the sftp protocol and it implements the -13 draft.  However Solaris 11 doesn't currently ship with the mod_sftp module, and even if it did the /usr/bin/sftp client doesn't talk that protocol version so an alternate client would be needed too.

April 14, 2015

Security Blog: April 2015 Critical Patch Update Released

April 14, 2015 20:17 GMT

Hello, this is Eric Maurice.

Oracle today released the April 2015 Critical Patch Update. The predictable nature of the Critical Patch Update program is intended to provide customers the ability to plan for the application of security fixes across all Oracle products. Critical Patch Updates are released quarterly in the months of January, April, July, and October. Unfortunately, Oracle continues to periodically receive reports of active exploitation of vulnerabilities that have already been fixed by Oracle in previous Critical Patch Update releases. In some instances, malicious attacks have been successful because customers failed to apply Critical Patch Updates. The “Critical” in the designation of the Critical Patch Update program is intended to highlight the importance of the fixes distributed through the program. Oracle highly recommends that customers apply these Critical Patch Updates as soon as possible. Note that Critical Patch Updates are cumulative for most Oracle products. As a result, the application of the most recent Critical Patch Update brings customers to the most recent security release, and addresses all previously-addressed security flaws for these products. The Critical Patch Update release schedule for the next 12 calendar months is published on Oracle’s Critical Patch Updates, Security Alerts and Third Party Bulletin page on Oracle.com.

The April 2015 Critical Patch Update provides 98 new fixes for security issues across a wide range of product families including: Oracle Database, Oracle Fusion Middleware, Oracle Hyperion, Oracle Enterprise Manager, Oracle E-Business Suite, Oracle Supply Chain Suite, Oracle PeopleSoft Enterprise, Oracle JDEdwards EnterpriseOne, Oracle Siebel CRM, Oracle Industry Applications, Oracle Java SE, Oracle Sun Systems Products Suite, Oracle MySQL, and Oracle Support Tools.

Out of these 98 new fixes, 4 are for the Oracle Database. None of the database vulnerabilities are remotely exploitable without authentication. The most severe of the database vulnerabilities (CVE-2015-0457) has received a CVSS Base Score of 9.0, only for Windows and for Database versions prior to 12c. This Base Score is 6.5 for Database 12c on Windows and for all versions of Database on Linux, Unix and other platforms. This vulnerability is related to the presence of the Java Virtual Machine in the database.

17 of the vulnerabilities fixed in this Critical Patch Update are for Oracle Fusion Middleware. 12 of these Fusion Middleware vulnerabilities are remotely exploitable without authentication, and the highest reported CVSS Base Score is 10.0. This CVSS 10.0 Base Score is for CVE-2015-0235 (a.k.a. GHOST, which affects the GNU libc library) affecting the Oracle Exalogic Infrastructure.

This Critical Patch Update also delivers 14 new security fixes for Oracle Java SE. 11 of these Java SE fixes are client-only (i.e., these vulnerabilities can be exploited only through sandboxed Java Web Start applications and sandboxed Java applets). Two apply to JSSE client and server deployments and 1 to Java client and server deployments. The highest CVSS Base Score reported for these vulnerabilities is 10.0, and this score applies to 3 of the Java vulnerabilities (CVE-2015-0469, CVE-2015-0459, and CVE-2015-0491).

For Oracle Applications, this Critical Patch Update provides 4 new fixes for Oracle E-Business Suite, 7 for Oracle Supply Chain Suite, 6 for Oracle PeopleSoft Enterprise, 1 for Oracle JDEdwards EnterpriseOne, 1 for Oracle Siebel CRM, 2 for the Oracle Commerce Platform, 2 for Oracle Retail Industry Suite, and 1 for Oracle Health Sciences Applications.

Finally, this Critical Patch Update provides 26 new fixes for Oracle MySQL. 4 of the MySQL vulnerabilities are remotely exploitable without authentication and the maximum CVSS Base Score for the MySQL vulnerabilities is 10.0.

As stated at the beginning of this blog, Oracle recommends that customers consistently apply Critical Patch Updates as soon as possible. The security fixes provided through the Critical Patch Update program are thoroughly tested to ensure that they do not introduce regressions across the Oracle stack. Extensive documentation is available on the My Oracle Support Site, and customers are encouraged to contact Oracle Support if they have questions about how to best deploy the fixes provided through the Critical Patch Update program.

For More Information:

The April 2015 Critical Patch Update advisory is located at http://www.oracle.com/technetwork/topics/security/cpuapr2015-2365600.html

The Critical Patch Updates, Security Alerts and Third Party Bulletin page is located at http://www.oracle.com/technetwork/topics/security/alerts-086861.html

The Oracle Software Security Assurance web site is located at http://www.oracle.com/us/support/assurance/overview/index.html. Oracle’s vulnerability handling policies and practices are described at http://www.oracle.com/us/support/assurance/vulnerability-remediation/introduction/index.html

Garrett D'Amore: vtprint - blast from the past

April 14, 2015 18:13 GMT
I recently decided to have a look back at some old stuff I wrote.  Blast from the past stuff.  One of the things I decided to look at was the very first open source program I wrote -- something called vtprint.

vtprint is a tool that borrowed ideas from the pine mail program.  This comes from the days of serial dialups, and legacy (Kermit or ProComm, for those who remember such things) serial connectivity over 14.4K modems.  Printing a file on a remote server was hard back then; you could transfer the file using xmodem via rx(1), or its "newer" variants, rb(1) or rz(1), but this was awkward.  It turns out that most physical terminals had support for an escape sequence that would start hardcopy to a locally attached printer, and another that would stop it.  And indeed, many terminal emulators have the same support.  (I added support for this myself to the rxvt(1) terminal emulator back in the mid 90's, so any emulators derived from rxvt inherit this support as well.)

So vtprint, which in retrospect could have been written in a few lines of shell code, is a C program.  It supports configuration via a custom file called "vtprintcap" that can provide information about different $TERM values and the escape sequences they use.  So for example, TVI925 terminals use a different escape sequence than VT100 or VT220 terminals.
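Under the hood this is just a pair of escape sequences. As a minimal sketch of the idea (assuming a VT100-style terminal that honours the ANSI media-copy controls; other terminals use different sequences, which is exactly what vtprintcap describes):

# Everything written between the two sequences goes to the locally
# attached (or emulated) printer instead of the screen.
printf '\033[5i'    # "printer controller on"  - start hardcopy
cat README          # the file to print on the local side
printf '\033[4i'    # "printer controller off" - stop hardcopy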

The first version of it, 1.0, is lost in time, and was written back in 1991 or 1992 while I was working as a "Computer Consultant" (fancy name for BOFH, though I also did a lot of programming for the school's Windows 3.1 network and Novell NetWare 3.12 server) at the Haas School of Business at UC Berkeley.  (I was a Chemical Engineering student there, and I flunked out in thermodynamics -- I still blame Gibbs Free Energy ΔG as a concept I never truly could get my mind around.  It took a couple of years before I finally gave up fighting "hard engineering" and accepted my true calling as a software engineer.)

Anyway, I transferred to SDSU into the Computer Science department.  While there, back in 1993, I updated vtprint significantly, and that is the release that lives on at SourceForge.

Today I imported that project to github.  (And let me tell you, it took some hoops to do this.  Apparently all the old CVS projects that were going to be converted are supposed to have done so already, because at this point even the conversion tools are mostly crufty and broken.  Maybe I'll post what I did there.)  Most of the files indicate an age of 11 years ago.  That's because 11 years ago I imported the source into CVS for the benefit of SourceForge.  The actual files date back to 1994, and you can even see my old email address (which hasn't worked for a couple of decades) in the files.

So, I gave vtprint a whirl today for the first time in years:

garrett@hipster{4}> ./vtprint README
*** vtprint (v2.0.2) ***
Copyright 1993-1994, Garrett D'Amore
Last revised October 25, 1994.
NO WARRANTY!  Use "vtprint -w" for info.
Freely redistributable.  Use "vtprint -l" for info.
vtprint: Can't open /usr/local/lib/vtprint/vtprintcap, using builtin codes.
vtprint: Using <stdout> for output.
vtprint: Output flags: formfeed
vtprint: Printed README.
vtprint: Successfully printed 1 file (1 specified).

Sadly, MacOS X Terminal.app does not appear to emulate the escape sequences properly.  But iTerm2 does so very nicely.  It even generates a nice print preview using the standard MacOS X print dialog.  Sadly, it does not pass through PostScript unmolested, but for printing ASCII files it works beautifully.

I think that I'm going to start using vtprint again, when I need to print a file located on a virtual machine, or over a connection that only offers SSH (such as a secured access machine).  With firewalled networks, I suspect that this program has new usefulness.  I'm also switching from Terminal.app (as many people have suggested) to iTerm2.

April 10, 2015

Jeff Savit: Oracle VM Server for SPARC 3.2 - Enhanced Virtual Disk Multipathing

April 10, 2015 16:11 GMT
Last month, Oracle released Oracle VM Server for SPARC release 3.2, which includes numerous enhancements. One of these is an improvement to virtual disk multipathing, which provides redundant paths to a virtual disk so that disk access continues even if a path or service domain fails.

Multipath groups are arranged in an active/standby pair of connections to the same physical media. In case of a path or service domain failure, I/O activity continues on a surviving path. This is also helpful for rolling upgrades: a service domain can be rebooted for an upgrade, and virtual disk I/O continues without interruption. That's important for continuous availability while upgrading system software.

A previous limitation was that you could not determine from commands which path was active, and you couldn't force activity onto a selected path. That meant that all the I/O for multiple virtual disks typically went to the primary path instead of being load balanced across service domains and HBAs. You could deduce which service domains were actively doing disk I/O by using commands like iostat, but there was no direct visibility and no way to spread the load. Oracle VM Server for SPARC 3.2 addresses this by adding command output that shows which path is active, and by letting you switch the active path to one of the available paths. Now, the command 'ldm list-bindings' shows which path is active, and the command 'ldm set-vdisk' lets you set which path is active. For further details and syntax, please see the documentation at Configuring Virtual Disk Multipathing.
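As a rough sketch of what such a configuration looks like (volume, group, domain and backend names below are only examples; the exact option for switching the active path is in the documentation linked above):

# Export the same backend from two service domains into one mpgroup.
ldm add-vdsdev mpgroup=mpgrp1 /dev/dsk/c1t1d0s2 vol1@primary-vds0
ldm add-vdsdev mpgroup=mpgrp1 /dev/dsk/c1t1d0s2 vol1@alternate-vds0
ldm add-vdisk disk1 vol1@primary-vds0 ldg1

# With 3.2, list-bindings reports which path in mpgrp1 is currently
# active, and set-vdisk can switch the active path.
ldm list-bindings ldg1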

April 09, 2015

OpenStack: Oracle at OpenStack Summit in Vancouver - May 18-22

April 09, 2015 21:42 GMT

Oracle is a premier sponsor of the OpenStack Summit in Vancouver, May 18-22. This year we will have experts from all of Oracle's OpenStack technologies, including Oracle Linux and Oracle VM, Oracle Solaris, Oracle ZFS Storage Appliance, and Oracle Tape Storage Solutions. We will have informative sessions and a booth to visit. Here's one of the Oracle sessions:

Session: Enabling Archive Storage With OpenStack Swift, Tuesday, May 19 from 2:50 p.m. to 3:30 p.m.

We encourage you to visit Oracle Booth #P9 to discuss your requirements with our OpenStack experts and how best to address your issues for a smooth deployment. Marketplace hours and demos will be held on:

Hope to meet you at OpenStack Summit!  

April 06, 2015

The Wonders of ZFS Storage: Is that purpose-built backup appliance really the best for your database backup and restore?

April 06, 2015 19:18 GMT

Data protection is a critical part of the job if you have anything to do with IT operations. Regardless of job title, database administrators and storage administrators alike know how critical it is to plan for any type of systems outage. Backup and restore are key components of data protection and of preparing for a disaster recovery event. The storage architecture is a key element in the overall enterprise IT environment, and when focusing on the implementation of disk storage for backup and recovery, there isn’t just one platform for the task. And if you are looking at database backups, only Oracle can provide a better database backup platform than storage behemoth EMC.

You can hear all about the benefits of the Oracle ZFS Storage Appliance from an Oracle sales rep, but having an outside, independent analyst break down the appliance is even better. In a recent Competitive Advantage report, Jerome Wendt, lead analyst and president at DCIG, compared Oracle ZFS Storage ZS4-4 to EMC Data Domain 990. His conclusion: “By using Oracle’s ZS4-4, enterprises for the first time have a highly-available, highly scalable solution that is specifically co-engineered with Oracle Database to accelerate backups while minimizing storage costs and capacity.”

The report cites a number of advantages that ZS4-4 has over EMC DD990:

Higher backup and restore throughput:

ZS4-4 can achieve native backup throughput speeds of up to 30TB/hour and restore speeds of up to 40TB/hour. EMC DD990 only achieves a maximum 15TB/hour backup rate, with no published restore rates available.

Ample capacity to store enterprise Oracle Database backups:

ZS4-4 scales up to 3.5PB. DD990 only scales to a maximum of 570TB.

Higher data reduction rates than deduplication can achieve:

The ZS4-4 supports Hybrid Columnar Compression (HCC), so it can achieve compression rates of up to 50x and overall storage capacity reduction of up to 40%. HCC is not available with non-Oracle storage. DD990 cannot deliver these high Oracle Database data reduction rates.

Accelerates backup and recovery throughput:

Oracle Intelligent Storage Protocol reduces manual tuning, expedites backups with ZS4-4 and is not available with the EMC DD990.

Higher availability:

ZS4-4 is available in an optional dual-controller configuration for high availability. The EMC DD990 is only available in a single controller configuration.

More flexible:

ZS4-4 is a multi-function system as opposed to DD990, a single-purpose deduplicating backup appliance.

In conclusion:

“The Oracle ZFS Storage ZS4-4 provides enterprises the flexibility they need to quickly and efficiently protect their Oracle databases. Shipping in a highly available configuration, scaling up to 3.5PBs and providing multiple storage networking protocols with high throughput rates, it combines these features with Oracle Database’s native HCC and OISP features to provide enterprises with a solution that is not dependent upon a “one size fits all,” single-function appliance such as the EMC DD990.”

To find out more about how DCIG came about their conclusions, take a look for yourself: www.oracle.com/us/corporate/analystreports/dcig-zs44-odb-vs-emc-dd990-2504515.pdf.

If you have any responsibility for the backup and restore of an Oracle database, you owe it to yourself to learn more about how the Oracle ZFS Storage Appliance delivers better performance to ensure your critical databases can be restored when everything is on the line.

Worried about managing your own storage? Oracle has made managing the ZFS Storage Appliance easy for the DBA by integrating appliance management into Oracle Enterprise Manager. Learn more on storage management in this short video: https://www.youtube.com/watch?v=JzmGKKXsuSw.

The Wonders of ZFS Storage: Lights, Camera, and… Action with Oracle ZFS Storage Appliance!

April 06, 2015 16:00 GMT

Inspired by the recent Oscars award ceremony, I decided to try my hand at directing and here is my work in progress script, featuring our star – Oracle ZFS Storage Appliance.

Location: Editing Studio

Scene 1:

The on-location camera captured some amazing shots! We have rows and rows of creative minds sifting through raw media files and trying to make sense of the content. Where do these media files live – Oracle ZFS Storage Appliance!

While they finalize the shots, in another part of the studio, VFX artists create stunning visual effects by connecting to the render farm with thousands of servers, which are again supported by our storage star!

A ZS4-4 system delivers low latency read performance, easily handling millions of DPX files while being accessed by thousands of render farm servers and seamlessly delivering content to interactive digital artists. The ZS4-4 posted an SPC-2 result of 31,486.23 SPC-2 MBPS throughput, with streaming media performance at 36,175.91 SPC-2 MBPS.

Scene 2:

Meanwhile, in a futuristic looking IT center, the studio system administrators are engaged in deep discussion on how to configure the ZFS shares to deliver optimal performance. They just received word that the ongoing project has a variety of workload characteristics: the movie was filmed in 4K, but the VFX will be done in 2K and later scaled up to conform to the original format. Not to worry, they can continue planning their summer vacations, as Oracle ZFS Storage is there to save the day.

ZFS storage supports adjustable record size (8KB to 1MB), enabling VFX designers to choose large block sizes for efficiently handling 2K, 4K and high frame rate (3D) digital media workloads.
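On a general-purpose Solaris system the equivalent knob is the ZFS recordsize property (on the appliance it is the share's record size setting); a purely illustrative example, assuming a release that supports 1MB blocks and using made-up dataset names:

# Dataset tuned for large sequential media files: 1MB records.
zfs create -o recordsize=1M tank/media/plates
zfs get recordsize tank/media/plates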

Scene 3:

A studio system administrator is about to break for lunch when he receives a call from the VFX division. A VFX artist has suddenly observed a drop in performance, affecting the animation sequence she was working on. The studio system admin opens up the ZFS Storage GUI and goes to the Analytics section. Within minutes the hidden problem is discovered and the necessary action is taken to address the performance issue. Another employee-of-the-month award to add to his office wall!

Advanced Storage Analytics in ZFS Storage provide deep insights into workload characteristics, offering tremendous manageability advantages and 73% faster troubleshooting vs. NetApp (Strategic Focus Group Usability Comparison Study).

Scene 4:

The IT bigwigs of the studio decide in a fiscal planning meeting: we need a storage solution that delivers the highest efficiency and performance for the next big movie project. Who do we call?

Oracle ZFS Storage delivers high performance without a high price tag. Oracle’s high-end model, Oracle ZFS Storage ZS4-4, posted an SPC-2 result of 31,486.23 SPC-2 MBPS throughput at $17.09 SPC-2 Price-Performance, delivering “million dollar performance at half the price,” while Oracle’s midrange model, Oracle ZFS Storage ZS3-2, achieved 16,212.66 SPC-2 MBPS with an overall #1 ranking in price-performance at $12.08 SPC-2 Price-Performance (as of March 17, 2015).

Scene 5:

It is almost close of business at the editing studio. The VFX artists want to back up their files and protect the creative assets they have been working on. Oracle ZFS Storage Appliance to the rescue again!

Low latency write performance enables fast archiving, while strong AES 256-bit data-at-rest encryption combined with a two-tier key architecture offers highly secure, granular and easy-to-implement storage-level encryption for rich media assets.

THE END

Watch out for the Oracle booth at the NAB show this year. If you want an up-close and personal look at our star, the Oracle ZFS Storage Appliance, and want to know more about its super powers, visit Oracle Stand SU2202 (South Upper Hall), Las Vegas Convention Center, April 13-16, 2015.

March 31, 2015

Darryl Gove: Using the Solaris Studio IDE for remote development

March 31, 2015 19:35 GMT

Solaris Studio has an IDE based on NetBeans.org. One of the features of the IDE is its ability to do remote development, i.e. do your development work on a Windows laptop while doing the builds on a remote Solaris or Linux box. Vladimir has written up a nice how-to guide covering the three models that Studio supports:

As shown in the image, the three models are:

Marcel Hofstetter: LDom Networking using SR-IOV (VF) or vnet?

March 31, 2015 15:30 GMT
The SPARC/LDoms technology offers two options for setting up Ethernet network configurations.
One option is virtual networking using vnet, which supports live migration, and since the current LDoms version 3.2 its performance overhead is minimal. This is why I would recommend using vnet, especially when your LDoms sit on the same system/VSW and communication stays local. In that case vnet communication is very fast and doesn't use the network card at all. Using uperf between two small LDoms with one core assigned each, the throughput reaches more than 7Gb/s on a SPARC T4-1 server.

SR-IOV (Ethernet Virtual Functions) is available with newer Ethernet cards; older cards don't support this technology. LDoms version 3.2, released this month, removes the old drawback of LDom panics when the control domain reboots. SR-IOV/VF offers a small performance benefit over vnet, at the cost of losing LDom (live) migration. For apps that require the lowest latency, SR-IOV Virtual Functions are the right choice.
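As a rough illustration of the two approaches (service, domain and device names below are placeholders; use 'ldm list-io' to find the actual PF names on your system):

# Virtual networking: a virtual switch in the service domain plus a
# vnet device in the guest (supports live migration).
ldm add-vsw net-dev=net0 primary-vsw0 primary
ldm add-vnet vnet0 primary-vsw0 ldg1

# SR-IOV: create a virtual function on a supported card and assign it
# to the guest (lower latency, but that guest can no longer be migrated).
ldm create-vf /SYS/MB/NET0/IOVNET.PF0
ldm add-io /SYS/MB/NET0/IOVNET.PF0.VF0 ldg1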

What's New in LDoms 3.2

March 27, 2015

Robert Milkowski: Triton: Better Docker

March 27, 2015 20:07 GMT
Bryan on Triton.

March 26, 2015

Joerg Moellenkamp: Oracle Business Breakfast "Netzwerktechnik" on March 27 is cancelled

March 26, 2015 19:48 GMT
We already sent this message the day before yesterday to the guests who had registered, but here it is again for everyone who might simply have dropped by tomorrow: the vacation and sickness situation is apparently so strained at the moment (we completely underestimated how strongly this would already take effect in the week before Good Friday week) that many of the "frequent breakfasters" and a number of first-time visitors were unable to accept despite their interest. We have therefore decided, due to the very small number of registrations, to cancel tomorrow's Breakfast.

March 25, 2015

Darryl Gove: Community redesign...

March 25, 2015 16:16 GMT

Rick has a nice post about the changes to the Oracle community pages. Useful quick read.

March 24, 2015

Darryl Gove: Building xerces 2.8.0

March 24, 2015 19:17 GMT

Some notes on building xerces 2.8.0 on Solaris. You can find the build instructions on the xerces site, but some changes are needed to make it work with recent Studio compilers.

Build with:

$ ./runConfigure -p solaris -c cc -x CC -r pthread -b 64
$ ./configure
$ gmake

Hopefully this gives you sufficient information to build the 2.8.0 version of the library.

Update: On 19th March 2015 they removed the 2.8 branch of xerces. So the instructions are rather redundant now :)

Bryan Cantrill: Triton: Docker and the “best of all worlds”

March 24, 2015 17:06 GMT

When Docker first rocketed into the nerdosphere in 2013, some wondered how we at Joyent felt about its popularity. Having run OS containers in multi-tenant production for nearly a decade (and being one of the most vocal proponents of OS-based virtualization), did we somehow resent the relatively younger Docker? Some were surprised to learn that (to the contrary!) we have been elated to see the rise of Docker: we share with Docker a vision for a containerized future, and we love that Docker has brought the technology to a much broader audience — and via an entirely different vector (namely, emphasizing developer agility instead of merely operational efficiency). Given our enthusiasm, you can imagine the question we posed to ourselves over a year ago: could we somehow combine the operational strength of SmartOS containers with the engaging developer experience of Docker? Importantly, we had no desire to develop a “better” Docker — we merely wanted to use SmartOS and SmartDataCenter as a substrate upon which to deploy Docker containers directly onto the metal. Doing this would leverage over a decade of deep operating systems engineering with technologies like Crossbow, ZFS, DTrace and (of course) Zones — and would deliver all of the operational advantages of pure OS-based virtualization to Docker containers: performance, elasticity, security and density.

That said, there was an obvious hurdle: while designed to be cross-platform, Docker is a Linux-borne technology — and the repository of Docker images is today a collection of Linux binaries. While SmartOS is Unix, it (somewhat infamously) isn’t Linux: applications need to be at least recompiled (if not ported) to work on SmartOS. Into this gap came a fortuitous accident: David Mackay, a member of the illumos community, attempted to revive LX-branded zones, an old Sun project that provided Linux emulation in a zone. While this project had been very promising when it was first done years ago, it had also been restricted to emulating a 2.4 Linux kernel for 32-bit binaries — and it was clear at the time that modernizing it was going to be significant work. As a result, the work sat unattended in the system for a while before being unceremoniously ripped out in 2010. It seemed clear that with the passage of time, this work would hardly be revivable: it had been so long, any resurrection was going to be tantamount to a rewrite.

But fortunately, David didn’t ask us our opinion before he attempted to revive it — he just did it. (As an aside: a tremendous advantage of open source is that the community can perform experiments that you might deem too risky or too expensive in terms of opportunity cost!) When David reported his results, we were taken aback: yes, this had the same limitations that it had always had (namely, 32-bit and lacking many modern Linux facilities), but given how many modern binaries still worked, it was also clear that this was a more viable path than we had thought. Energized by David’s results, Joyent’s Jerry Jelinek picked it up from there, reintegrating the Linux brand into SmartOS in March of last year. There was still much to do of course, but Jerry’s work was a start — and reflected the constraints we imposed on ourselves: do it all in the open; do it all on SmartOS master; develop general-purpose illumos facilities wherever possible; and aim to upstream it all when we were done.

Around this time, I met with Docker CTO Solomon Hykes to share our (new) vision. Honestly, I didn’t know what his reaction would be; I had great respect for what Docker had done and was doing, but didn’t know how he would react to a system bold enough to go its own way at such a fundamental level. Somewhat to my surprise, Solomon was incredibly supportive: not only was he aware of SmartOS, but he was also intimately familiar with zones — and he didn’t need to be convinced of the merits of our approach. Better, he asked a question near and dear to my heart: “Does this mean that I’ll be able to DTrace my Linux apps in a Docker container?” When I indicated that yes, that’s exactly what it would mean, he responded: “It will be the best of all worlds!” That Solomon (and by extension, Docker) was not merely willing but actually eager to see Docker on SmartOS was hugely inspirational to us, and we redoubled our efforts.

Back at Joyent, we worked assiduously under Jerry’s leadership over the spring and summer, and by the fall, we were ready for an attempt on the summit: 64-bit. Like other bringup work we’ve done, this work was terrifying in that we had very little forward visibility, and little ability to parallelize. As if he were Obi-Wan Kenobi meeting Darth Vader in the Death Star, Jerry had to face 64-bit — alone. Fortunately, Jerry didn’t suffer Ben Kenobi’s fate; by late October, he had 64-bit working! With the project significantly de-risked, everything kicked into high gear: Josh Wilsdon, Trent Mick and their team went to work understanding how to integrate SmartDataCenter with Docker; Josh Clulow, Patrick Mooney and I attacked some of the nasty LX-branded zone issues that remained; and Robert Mustacchi and Rob Gulewich worked towards completing their vision for network virtualization. Knowing what we were going to do — and how important open source is to modern infrastructure software in general and Docker in particular — we also took an important preparatory step: we open sourced SmartDataCenter and Manta.

Charged by having all of our work in the open and with a clear line of sight on what we wanted to deliver, progress was rapid. One major question: where to run the Docker daemon? In digging into Docker, we saw that much of what the actual daemon did would need to be significantly retooled to be phrased in terms of not only SmartOS but also SmartDataCenter. However, our excavations also unearthed a gem: the Docker Remote API. Discovering a robust API was a pleasant surprise, and it allowed us to take a different angle: instead of running a (heavily modified) Docker daemon, we could implement a new SDC service to provide a Docker Remote API endpoint. To Docker users, this would look and feel like Docker — and it would give us a foundation that we knew we could develop. At this point, we’re pretty good at developing SDC-based services (microservices FTW!), and progress on the service was quick. Yes, there were some thorny issues to resolve (and definitely note differences between our behavior and the stock Docker behavior!), but broadly speaking we have been able to get it to work without violating the principle of least surprise. And from a Docker developer perspective, having a Docker host that represents an entire datacenter — that is, a (seemingly) galactic Docker host — feels like an important step forward. (Many are as excited by this work as we are, but I think my favorite reaction is the back-handed compliment from Jeff Waugh of Canonical fame; somehow a compliment that is tied to an insult feels indisputably earnest.)

With everything coming together, and with new hardware being stood up for the new service, there was one important task left: we needed to name this thing. (Somehow, “SmartOS + LX-branded zones + SmartDataCenter + sdc-portolan + sdc-docker” was a bit of a mouthful.) As we thought about names, I turned back to Solomon’s words a year ago: if this represented the best of two different worlds, what mythical creatures were combinations of different animals? While this search yielded many fantastic concoctions (a favorite being Manticore — and definitely don’t mess with Typhon!), there was one that stood out: Triton, son of Poseidon. As half-human and half-fish and a god of the deep, Triton represents the combination of two similar but different worlds — and as a bonus, the name rolls off the tongue and fits nicely with the marine metaphor that Docker has pioneered.

So it gives me great pleasure to introduce Triton to the world — a piece of (open source!) engineering brought to you by a cast of thousands, over the course of decades. In a sentence (albeit a wordy one), Triton lets you run secure Linux containers directly on bare metal via an elastic Docker host that offers tightly integrated software-defined networking. The service is live, so if you want to check it out, sign up! If you’re looking for more technical details, check out both Casey’s blog entry and my Future of Docker in Production presentation. If you’d like it on-prem, get in touch. And if you’d prefer to DIY, start with sdc-docker. Finally, forgive me one shameless plug: if you happen to be in the New York City area in early April, be sure to join us at the Container Summit, where we’ll hear perspectives from analysts like Gartner, enterprise users of containers like Lucera and Walmart, and key Docker community members like Tutum, Shopify, and Docker themselves. Should make for an interesting afternoon!

Welcome to Triton — and to the best of all worlds!

March 23, 2015

OpenStack: Available Hands-on Labs: Oracle OpenStack for Oracle Linux and Oracle VM

March 23, 2015 18:16 GMT

Last year, Hands-on Lab events for OpenStack at Oracle Open World were completely sold out. People who had no prior experience with OpenStack could not believe how easy it was for them to launch networks and instances and exercise many features of OpenStack. Given the overwhelming demand for the hands-on lab and the positive feedback from the participants, we are announcing its availability to you – all you need is a laptop to download the lab and the 21-page document using the links below in this blog.

This lab takes you through installing and exercising OpenStack. It goes through basic operations, networking, storage and guest communication. OpenStack has many more features you can explore using this setup. The lab also shows you how to transfer information into the guest. This is very important when creating templates or when trying to automate the deployment process. As we have stated, our goal is to help make OpenStack an enterprise-grade solution. The Hands-on Lab gives you a very quick and easy way to learn how to transfer any key information about your own application template into the guest – a key step in real-world deployment.

We encourage users to go ahead and use this setup to test more OpenStack features. OpenStack is not simple to deal with and usually requires a high level of skill, but with this VirtualBox VM users can try out almost every feature.

The Hands-on Lab getting-started document is now available to you at the following websites:

 - Landing page:

http://www.oracle.com/technetwork/server-storage/openstack/linux/downloads/index.html

- Users can download a pre-installed VirtualBox VM for testing and demo purposes:

Please visit the landing page above to accept the license agreement, then download either the short or the long version.

Hands-on lab - OpenStack in virtual box (html)


Instructions on how to use the OpenStack VirtualBox image

Download Oracle VM VirtualBox

 If you have any questions, we have an OpenStack Community Forum where you can raise your questions and add your comments.

Jeff Savit: Oracle VM Server for SPARC 3.2 - Live Migration

March 23, 2015 17:16 GMT

Oracle has just released Oracle VM Server for SPARC release 3.2. This update has been integrated into Oracle Solaris 11.2 beginning with SRU 8.4. Please refer to Oracle Solaris 11.2 Support Repository Updates (SRU) Index [ID 1672221.1]. 

This new release introduces the following features:

Live migration performance and security enhancements

This blog entry details 3.2 improvements to live migration. Oracle VM Server for SPARC has supported live migration since release 2.1, and has been enhanced over time to provide features like cross-CPU live migration to permit migrating domains across different SPARC CPU server types. Oracle VM Server for SPARC 3.2 improves live migration performance and security.

Live migration performance

The time to migrate a domain is reduced in Oracle VM Server for SPARC 3.2 by the following improvements:

These and other changes reduce overall migration time, reduce domain suspension time (the time at the end of migration when the domain is paused to retransmit the last remaining pages), and reduce CPU utilization. In my own testing I've seen migrations run 50% to 500% faster depending on the guest domain activity and memory size. Others may experience different times, depending on network and CPU speeds and domain configuration.

This improvement is available on all SPARC servers supporting Oracle VM Server for SPARC, including the older UltraSPARC T2, UltraSPARC T2 Plus, and SPARC T3 systems. Some speedups are only available for guest domains running Solaris 11.2 SRU 8 or later, and will not be available on Solaris 10. Solaris 10 guests must run Solaris 10 10/09 or later, as that release introduced code for cooperative live migration that works with the hypervisor.

Live migration security

Oracle VM Server for SPARC 3.2 improves live migration security by adding certificate-based authentication and supporting the FIPS 140-2 standard.

Certificate based authentication

Live migration requires mutual authentication between the source and target servers. The simplest way to initiate live migration is to issue an "ldm migrate" command on the source system, specifying an administrator password for the target system or pointing to a root-readable file containing the target system's password. That is cumbersome, and not ideal for security. Oracle VM Server for SPARC 3.2 adds a secure, scalable way to permit password-less live migration using certificates, which prevents man-in-the-middle attacks.

This is accomplished by using SSL certificates to establish a trust relationship between different servers' control domains, as described at Configuring SSL Certificates for Migration. In brief, a certificate is securely copied from the remote system's /var/opt/SUNWldm/server.crt to the local system's /var/opt/SUNWldm/trust, and a symbolic link is made from the certificate in the ldmd trusted certificate directory to /etc/certs/CA. After the certificate and ldmd services are restarted, the two control domains can securely communicate with one another without passwords. This enhancement is available on all servers supporting Oracle VM Server for SPARC, using either Solaris 10 or Solaris 11.
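A minimal sketch of that exchange (host names, file names and service FMRIs here are only illustrative; follow the linked documentation for the exact procedure):

# On the local control domain: fetch the remote control domain's
# certificate into the ldmd trusted certificate directory.
scp root@remote-cd:/var/opt/SUNWldm/server.crt /var/opt/SUNWldm/trust/remote-cd.pem

# Link it into the system CA certificate directory.
ln -s /var/opt/SUNWldm/trust/remote-cd.pem /etc/certs/CA/

# Restart the certificate and Logical Domains Manager services.
svcadm restart svc:/system/ca-certificates
svcadm restart ldmd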

FIPS 140-2 Mode

The Oracle VM Server for SPARC Logical Domains Manager can be configured to perform domain migrations using the Oracle Solaris FIPS 140-2 certified OpenSSL libraries, as described at http://docs.oracle.com/cd/E48724_01/html/E48732/fipsmodeformigration.html#scrolltoc. When this is in effect, migrations conform to this standard, and can only be done between servers that are all in FIPS 140-2 mode.

For more information, please see Using a FIPS 140 Enabled System in Oracle® Solaris 11.2. This enhancement requires that the control domain run Oracle Solaris 11.2 SRU 8.4 or later.

Where to get more information

For additional resources about Oracle VM Server for SPARC 3.2, please see the documentation at http://docs.oracle.com/cd/E48724_01/index.html, especially the What's New page, the Release Notes and the Administration Guide.

Robert Milkowski: Physical Locations of PCI SSDs

March 23, 2015 14:26 GMT
The latest update to Solaris 11 (SRU 11.2.8.4.0) has a new feature: it can identify the physical locations of F40 and F80 PCI SSD cards, registering them under the Topology Framework.

Here is an example of diskinfo output on an X4-2L server with 24 SSDs in front presented as a JBOD, 2 SSDs in the rear mirrored with a RAID controller (for the OS), and 4 PCI F80 cards (each card presents 4 LUNs):

$ diskinfo
D:devchassis-path c:occupant-compdev
--------------------------------------- ---------------------
/dev/chassis/SYS/HDD00/disk c0t55CD2E404B64A3E9d0
/dev/chassis/SYS/HDD01/disk c0t55CD2E404B64B1ABd0
/dev/chassis/SYS/HDD02/disk c0t55CD2E404B64B1BDd0
/dev/chassis/SYS/HDD03/disk c0t55CD2E404B649E02d0
/dev/chassis/SYS/HDD04/disk c0t55CD2E404B64A33Ed0
/dev/chassis/SYS/HDD05/disk c0t55CD2E404B649DB5d0
/dev/chassis/SYS/HDD06/disk c0t55CD2E404B649DBCd0
/dev/chassis/SYS/HDD07/disk c0t55CD2E404B64AB2Fd0
/dev/chassis/SYS/HDD08/disk c0t55CD2E404B64AC96d0
/dev/chassis/SYS/HDD09/disk c0t55CD2E404B64A580d0
/dev/chassis/SYS/HDD10/disk c0t55CD2E404B64ACC5d0
/dev/chassis/SYS/HDD11/disk c0t55CD2E404B64B1DAd0
/dev/chassis/SYS/HDD12/disk c0t55CD2E404B64ACF1d0
/dev/chassis/SYS/HDD13/disk c0t55CD2E404B649EE1d0
/dev/chassis/SYS/HDD14/disk c0t55CD2E404B64A581d0
/dev/chassis/SYS/HDD15/disk c0t55CD2E404B64AB9Cd0
/dev/chassis/SYS/HDD16/disk c0t55CD2E404B649DCAd0
/dev/chassis/SYS/HDD17/disk c0t55CD2E404B6499CBd0
/dev/chassis/SYS/HDD18/disk c0t55CD2E404B64AC98d0
/dev/chassis/SYS/HDD19/disk c0t55CD2E404B6499B7d0
/dev/chassis/SYS/HDD20/disk c0t55CD2E404B64AB05d0
/dev/chassis/SYS/HDD21/disk c0t55CD2E404B64A33Fd0
/dev/chassis/SYS/HDD22/disk c0t55CD2E404B64AB1Cd0
/dev/chassis/SYS/HDD23/disk c0t55CD2E404B64A3CFd0
/dev/chassis/SYS/HDD24 -
/dev/chassis/SYS/HDD25 -
/dev/chassis/SYS/MB/PCIE1/F80/LUN0/disk c0t5002361000260451d0
/dev/chassis/SYS/MB/PCIE1/F80/LUN1/disk c0t5002361000258611d0
/dev/chassis/SYS/MB/PCIE1/F80/LUN2/disk c0t5002361000259912d0
/dev/chassis/SYS/MB/PCIE1/F80/LUN3/disk c0t5002361000259352d0
/dev/chassis/SYS/MB/PCIE2/F80/LUN0/disk c0t5002361000262937d0
/dev/chassis/SYS/MB/PCIE2/F80/LUN1/disk c0t5002361000262571d0
/dev/chassis/SYS/MB/PCIE2/F80/LUN2/disk c0t5002361000262564d0
/dev/chassis/SYS/MB/PCIE2/F80/LUN3/disk c0t5002361000262071d0
/dev/chassis/SYS/MB/PCIE4/F80/LUN0/disk c0t5002361000125858d0
/dev/chassis/SYS/MB/PCIE4/F80/LUN1/disk c0t5002361000125874d0
/dev/chassis/SYS/MB/PCIE4/F80/LUN2/disk c0t5002361000194066d0
/dev/chassis/SYS/MB/PCIE4/F80/LUN3/disk c0t5002361000142889d0
/dev/chassis/SYS/MB/PCIE5/F80/LUN0/disk c0t5002361000371137d0
/dev/chassis/SYS/MB/PCIE5/F80/LUN1/disk c0t5002361000371435d0
/dev/chassis/SYS/MB/PCIE5/F80/LUN2/disk c0t5002361000371821d0
/dev/chassis/SYS/MB/PCIE5/F80/LUN3/disk c0t5002361000371721d0

Let's create a ZFS pool on top of the F80s and see zpool status output:
(you can use the SYS/MB/... names when creating a pool as well)
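For example, a mirrored pool can be built from the chassis paths directly; the pool name and the pairing of LUNs below are just an illustration:

# Mirror two F80 LUNs, referring to them by physical location rather
# than by their c0t...d0 names.
zpool create tank mirror \
    /dev/chassis/SYS/MB/PCIE4/F80/LUN0/disk \
    /dev/chassis/SYS/MB/PCIE1/F80/LUN1/disk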

$ zpool status -l XXXXXXXXXXXXXXXXXXXX-1
  pool: XXXXXXXXXXXXXXXXXXXX-1
 state: ONLINE
  scan: scrub repaired 0 in 0h0m with 0 errors on Sat Mar 21 11:31:01 2015
config:

        NAME                                         STATE     READ WRITE CKSUM
        XXXXXXXXXXXXXXXXXXXX-1                       ONLINE       0     0     0
          mirror-0                                   ONLINE       0     0     0
            /dev/chassis/SYS/MB/PCIE4/F80/LUN0/disk  ONLINE       0     0     0
            /dev/chassis/SYS/MB/PCIE1/F80/LUN1/disk  ONLINE       0     0     0
          mirror-1                                   ONLINE       0     0     0
            /dev/chassis/SYS/MB/PCIE4/F80/LUN1/disk  ONLINE       0     0     0
            /dev/chassis/SYS/MB/PCIE1/F80/LUN3/disk  ONLINE       0     0     0
          mirror-2                                   ONLINE       0     0     0
            /dev/chassis/SYS/MB/PCIE4/F80/LUN3/disk  ONLINE       0     0     0
            /dev/chassis/SYS/MB/PCIE1/F80/LUN2/disk  ONLINE       0     0     0
          mirror-3                                   ONLINE       0     0     0
            /dev/chassis/SYS/MB/PCIE4/F80/LUN2/disk  ONLINE       0     0     0
            /dev/chassis/SYS/MB/PCIE1/F80/LUN0/disk  ONLINE       0     0     0
          mirror-4                                   ONLINE       0     0     0
            /dev/chassis/SYS/MB/PCIE2/F80/LUN3/disk  ONLINE       0     0     0
            /dev/chassis/SYS/MB/PCIE5/F80/LUN0/disk  ONLINE       0     0     0
          mirror-5                                   ONLINE       0     0     0
            /dev/chassis/SYS/MB/PCIE2/F80/LUN2/disk  ONLINE       0     0     0
            /dev/chassis/SYS/MB/PCIE5/F80/LUN1/disk  ONLINE       0     0     0
          mirror-6                                   ONLINE       0     0     0
            /dev/chassis/SYS/MB/PCIE2/F80/LUN1/disk  ONLINE       0     0     0
            /dev/chassis/SYS/MB/PCIE5/F80/LUN3/disk  ONLINE       0     0     0
          mirror-7                                   ONLINE       0     0     0
            /dev/chassis/SYS/MB/PCIE2/F80/LUN0/disk  ONLINE       0     0     0
            /dev/chassis/SYS/MB/PCIE5/F80/LUN2/disk  ONLINE       0     0     0

errors: No known data errors

It also means that all FMA alerts should include the physical path as well, which should make identification of a given F80/LUN, if something goes wrong, so much easier.

Sunay Tripathi: Netvisor Analytics: Secure the Network/Infrastructure

March 23, 2015 04:55 GMT

We recently heard President Obama declare cyber security one of his top priorities, and in recent times we have seen major corporations suffer tremendously from breaches and attacks. The most notable one is the breach at Anthem. For those who are still unaware, Anthem is the umbrella company that also runs Blue Cross and Blue Shield insurance. The attackers had access to personal details, social security numbers, home addresses, and email addresses over a period of a month. What was taken and the extent of the damage is still guesswork, because the network is a black hole that needs extensive tools to figure out what is happening or what happened. It also means that my family is impacted, and since we use Blue Shield at Pluribus Networks, every employee and their family is impacted as well, prompting me to write this blog and an open invitation to the Anthem people and the government to pay attention to a new architecture that lets the network play a role similar to the NSA in helping protect the infrastructure. It all starts with converting the network from a black hole into something we can measure and monitor. To make this concrete, let's look at the state of the art today, why it is of little use, and then walk through a step-by-step example of how Netvisor analytics helps you see everything and take action on it.

Issues with existing networks and modern attack vector

In a typical datacenter or enterprise, the switches and routers are dumb packet-switching devices. They switch billions of packets per second between servers and clients at sub-microsecond latencies using very fast ASICs, but have no capability to record anything. As such, external optical TAPs and monitoring networks have to be built to get a sense of what is actually going on in the infrastructure. The figure below shows what monitoring looks like today. (Figure: Traditional Network Monitoring)

This is where the challenges start coming together. The typical enterprise and datacenter network that connects the servers runs at 10/40Gbps today and is moving to 100Gbps tomorrow. These switches typically have 40-50 servers connected to them, pumping traffic at 10Gbps. There are three possibilities for seeing everything that is going on:

  1. Provision a fiber optic tap at every link and divert a copy of every packet to the monitoring tools. Since fiber optic taps are passive, you have to copy every packet, and the monitoring tools need to deal with 500M to 1B packets per second from each switch. Assume a typical pod of 15-20 racks and 30-40 switches (who runs without HA?), and the monitoring tools need to deal with 15B to 40B packets per second. The monitoring software has to look inside each packet and potentially keep state to understand what is going on, which requires very complex software and an amazing amount of hardware. For reference, a typical high-end dual-socket server can get 15-40M packets per second into the system, but then has no CPU left to do anything else. We would need 1000 such servers plus the associated monitoring network, on top of the monitoring software, so we are looking at 15-20 racks of monitoring equipment alone. Add the monitoring software, storage, etc., and the cost of monitoring 15-20 racks of servers is probably 100 times more than the servers themselves.
  2. Selectively placing fiber optic taps at uplinks or edge ports gets us back to the inner network being a black hole, with no visibility into what is going on. A key lesson from the NSA and Homeland Security is that a successful defense against attack requires extensive monitoring; you just can't monitor only the edge.
  3. Use the switches themselves to selectively mirror traffic to monitoring tools. This is a more popular approach these days, but it is built on sampling, where the sampling rates are typically 1 in 5,000 to 10,000 packets. Better than nothing, but the monitoring software has nowhere close to meaningful visibility, and the cost goes up exponentially as more switches get monitored (the monitoring fabric needs more capacity, and the monitoring software gets more complex and needs more hardware resources).

So what is wrong with just sampling and monitoring/securing the edge? The answer is pretty obvious: we do that today, yet the break-ins keep happening. Many things contribute to it, starting with the fact that the attack vector itself has shifted. It's not that employees in these companies have become careless, but rather that a myriad of software and applications have become prevalent in the enterprise and the datacenter. Just look at the amount of software and the new applications that get deployed every day from so many sources, and the increasing hardware capacity underneath. Any of these can be exploited to let attackers in. Once the attacker has access to the inside, the attack on the actual critical servers and applications comes from within. Many of these platforms and applications go home with employees at night, where they are not protected by corporate firewalls and can easily upload data collected during the day (assuming the corporate firewall managed to block any connections). Every home is online today and most devices are constantly on the network, so attackers typically have easy access to devices at home, and the same devices go back to work behind the corporate firewalls.

Netvisor provides the distributed security/monitoring architecture

The goal of Netvisor is to make a switch programmable like a server. Netvisor leverages the new breed of Open Compute switches by memory-mapping the switch chip into the kernel over PCI-Express and taking advantage of the powerful control processors, large amounts of memory, and storage built into the switch chassis. The figure below contrasts Netvisor on a Server-Switch using the current generation of switch chips with a traditional switch where the OS runs on a low-powered control processor and low-speed buses.

Given that the cheapest form of compute these days is an Intel Rangeley class processor with 8-16GB of memory, all the ODM switches are using that as their compute complex. Facebook's Open Compute Program made this a standard, allowing all modern switches to have a small server inside them; that lays the foundation of our distributed analytics architecture on the switches, without requiring any TAPs or a separate monitoring network, as shown in the figure below.

Each Server-Switch now becomes an in-network analytics engine in addition to doing layer 2 switching and layer 3 routing. The Netvisor analytics architecture takes advantage of the following:

So Netvisor can filter the appropriate packets in the switch TCAM while switching 1.2 to 1.8Tbps of traffic at line rate, and process millions of hardware-filtered flows in software to keep the state of millions of connections in switch memory. As such, each switch in the fabric becomes a network DVR or time machine and records every application and VM flow it sees. On a Server-Switch with an Intel Rangeley class processor and 16GB of memory, each Netvisor instance is capable of tracking 8-10 million application flows at any given time. These Server-Switches have a list price of under $20k from Pluribus Networks and are cheaper than your typical switch that just does dumb packet switching.

While the servers have to be connected to the network to provide service (you can't just block all traffic to the servers), Netvisor on the switch can be configured to disallow any connections into its control plane (access only via monitors, or only from selected clients), making it much easier to defend against attack and providing an uncompromised view of the infrastructure that is not affected even when servers are broken into.

Live Example of Netvisor Analytics (detect attack/take action via vflow)

The analytics application on Netvisor is a Big Data application where each Server-Switch collects millions of records; when a user runs a query from any instance, the data is collected from each Server-Switch and presented in a coherent manner. The user has full scripting support along with REST, C, and Java APIs to extract the information in whatever format he wants and export it to any application for further analysis.

We can look at some live examples from Pluribus Networks' internal network, which uses a Netvisor-based fabric to meet all its network, analytics, security and services needs. The fabric consists of the following switches:

[screenshot: list of fabric switches]

The top 10 client-server pairs, based on the highest rate of TCP SYNs, are available using the following query:

[screenshot: top 10 client-server pairs sorted by SYN]

It looks like IP address 10.9.10.39 is literally DDoS'ing server 10.9.10.75. That is very interesting. But before digging into that, let's look at which client-server pairs are most active at the moment. So instead of sorting on SYN, we sort on EST (for established) and limit the output to the top 10 entries per switch (keep in mind each switch has millions of records going back days and months).

[screenshot: top 10 client-server pairs sorted by EST]

It appears that the IP address with the very high SYN rate does not show up in the established list at all. The failed SYNs started approximately 11 hours ago (around 10:30 this morning), so let's look at all connections with src-ip 10.9.10.39:

[screenshot: connections with src-ip 10.9.10.39]

This shows that not a single connection was successfully established. For sanity's sake, let's look at the top connections in terms of total bytes during the same period:

[screenshot: top connections by total bytes]

So the mystery deepens. The dst-port in question was 23398, which is not a well-known port. So let's look at the same connection signature; the easiest way is to look at all connections with destination port 23398:

[screenshot: connections with dst-port 23398]

It appears that multiple clients have the same signature. Obviously we dig deeper without limiting any output and look at this from many angles. After some investigation, it appears that this is not a legitimate application, and no developer in Pluribus owns these particular IP addresses. Our port analytics showed that these IPs belong to virtual machines that were all created a few days back, around the same time. The prudent thing is to block this port altogether across the entire fabric, quickly, using the vflow API.

It is worth noting that we used scope fabric to create this flow with action drop, to block it across the entire network (on every switch). We could have used a different flow action to watch this flow live or to record all traffic matching this flow across the network.

Outlier Analysis

Given that Netvisor analytics is not a statistical sample and accurately represents every single session between the servers and/or virtual machines, most customers have some form of scripting and logging mechanism that they deploy to collect this information. The example below shows the information a person is really interested in, selecting just the columns he wants to see.

[screenshot: query output restricted to the selected columns]

The same command is run from a cron job every night at midnight via a script, with a parseable delimiter of choice; the output is recorded in flat files and moved to a different location.

[screenshot: scripted query output with delimiter]

Another script records all destination IP addresses, sorts them, and compares them against the previous day to see which new IP addresses showed up in the outbound list, and similarly for the inbound list. IP addresses where both source and destination are local are ignored; addresses where either end is outside are fed into another tool which tracks the reputation of those IPs against attacker databases. Anything suspicious is flagged immediately. Similar scripts are used for compliance, to ensure there was no attempt to connect outside of legal services and that servers didn't issue outbound connections to employees' laptops (to detect malware). A rough sketch of such a comparison script is shown below.
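
As a rough illustration (not the customer's actual procedure), a day-over-day comparison can be as simple as the following Python sketch; the flat-file layout, with the destination IP in the second colon-delimited field, is an assumption made for the example.

#!/usr/bin/python
# Sketch only: report destination IPs seen today that were absent yesterday.
# Assumes the nightly flat files hold one colon-delimited record per line,
# with the destination IP in the second field (hypothetical layout).
import sys

def dst_ips(path):
    ips = set()
    with open(path) as f:
        for line in f:
            fields = line.strip().split(':')
            if len(fields) > 1:
                ips.add(fields[1])
    return ips

yesterday = dst_ips(sys.argv[1])   # e.g. flows.20150320.txt
today = dst_ips(sys.argv[2])       # e.g. flows.20150321.txt

for ip in sorted(today - yesterday):
    print "new outbound destination: %s" % ip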

Summary

Later investigation showed that we didn't have an intruder in our live example, but that one of the developers had created a bunch of virtual machines cloned from a disk image which contained this misbehaving application. It is still unclear where it found the server IP address, but things like this, and actual attacks, have happened in the past at Pluribus Networks, and Netvisor analytics helps us track them and take action. The network is not a black hole; it exposes the weaknesses of the applications running on our servers and virtual machines.

The description of the scripts in the outlier analysis is deliberately vague since it relates to a customer's security procedure, but we are building more sophisticated analysis engines to detect anomalies in real time against normal behavior.


March 22, 2015

Darryl GoveNew Studio C++ blogger

March 22, 2015 00:16 GMT

Please welcome another of my colleagues to blogs.oracle.com. Fedor's just posted some details about getting Boost to compile with Studio.

March 21, 2015

Robert MilkowskiManaging Solaris with RAD

March 21, 2015 01:30 GMT
Solaris 11 provides "The Remote Administration Daemon, commonly referred to by its acronym and command name, rad, is a standard system service that offers secure, remote administrative access to an Oracle Solaris system."

RAD is essentially an API to programmatically manage and query different Solaris subsystems like networking, zones, kstat, smf, etc.

Let's see an example of how to use it to list all zones configured on a local system.

# cat zone_list.py
#!/usr/bin/python

import rad.client as radcli
import rad.connect as radcon
import rad.bindings.com.oracle.solaris.radm.zonemgr_1 as zbind

with radcon.connect_unix() as rc:
    zones = rc.list_objects(zbind.Zone())
    for i in range(0, len(zones)):
        zone = rc.get_object(zones[i])
        print "zone: %s (%s)" % (zone.name, zone.state)
        for prop in zone.getResourceProperties(zbind.Resource('global')):
            if prop.name == 'zonename':
                continue
            print "\t%-20s : %s" % (prop.name, prop.value)

# ./zone_list.py
zone: kz1 (configured)
zonepath: :
brand : solarisk-kz
autoboot : false
autoshutdown : shutdown
bootargs :
file-mac-profile :
pool :
scheduling-class :
ip-type : exclusive
hostid : 0x44497532
tenant :
zone: kz2 (installed)
zonepath: : /system/zones/%{zonename}
brand : solarisk-kz
autoboot : false
autoshutdown : shutdown
bootargs :
file-mac-profile :
pool :
scheduling-class :
ip-type : exclusive
hostid : 0x41d45bb
tenant :

Here is another example showing how to create a new Kernel Zone with the autoboot property set to true:

#!/usr/bin/python

import sys

import rad.client
import rad.connect
import rad.bindings.com.oracle.solaris.radm.zonemgr_1 as zonemgr

class SolarisZoneManager:
    def __init__(self):
        self.description = "Solaris Zone Manager"

    def init_rad(self):
        try:
            self.rad_instance = rad.connect.connect_unix()
        except Exception as reason:
            print "Cannot connect to RAD: %s" % (reason)
            exit(1)

    def get_zone_by_name(self, name):
        try:
            pat = rad.client.ADRGlobPattern({'name': name})
            zone = self.rad_instance.get_object(zonemgr.Zone(), pat)
        except rad.client.NotFoundError:
            return None
        except Exception as reason:
            print "%s: %s" % (self.__class__.__name__, reason)
            return None

        return zone

    def zone_get_resource_prop(self, zone, resource, prop, filter=None):
        try:
            val = zone.getResourceProperties(zonemgr.Resource(resource, filter), [prop])
        except rad.client.ObjectError:
            return None
        except Exception as reason:
            print "%s: %s" % (self.__class__.__name__, reason)
            return None

        return val[0].value if val else None

    def zone_set_resource_prop(self, zone, resource, prop, val):
        current_val = self.zone_get_resource_prop(zone, resource, prop)
        if current_val is not None and current_val == val:
            # the val is already set
            return 0

        try:
            if current_val is None:
                zone.addResource(zonemgr.Resource(resource, [zonemgr.Property(prop, val)]))
            else:
                zone.setResourceProperties(zonemgr.Resource(resource), [zonemgr.Property(prop, val)])
        except rad.client.ObjectError as err:
            print "Failed to set %s property on %s resource for zone %s: %s" % (prop, resource, zone.name, err)
            return 0

        return 1

    def zone_create(self, name, template):
        zonemanager = self.rad_instance.get_object(zonemgr.ZoneManager())
        zonemanager.create(name, None, template)
        zone = self.get_zone_by_name(name)

        try:
            zone.editConfig()
            self.zone_set_resource_prop(zone, 'global', 'autoboot', 'true')
            zone.commitConfig()
        except Exception as reason:
            print "%s: %s" % (self.__class__.__name__, reason)
            return 0

        return 1

x = SolarisZoneManager()
x.init_rad()
if x.zone_create(str(sys.argv[1]), 'SYSsolaris-kz'):
    print "Zone created successfully."

There are many simple examples in the zonemgr(3RAD) man page, and what I found very useful is to look at solariszones/driver.py from OpenStack. It is actually very interesting that OpenStack is using RAD on Solaris.

RAD is very powerful, and with more modules constantly being added it is becoming a powerful programmatic API to remotely manage Solaris systems. It is also very useful if you are writing components of a configuration management system for Solaris.

What's the most anticipated RAD module currently missing in stable Solaris? I think it is a ZFS module...

Robert MilkowskiZFS: Persistent L2ARC

March 21, 2015 01:00 GMT
Solaris SRU 11.2.8.4.0 delivers persistent L2ARC. What is interesting about it is that it stores raw ZFS blocks, so if you have compression enabled then the L2ARC will also store compressed blocks (and can therefore hold more data). The same applies to encryption.

March 18, 2015

The Wonders of ZFS StorageOracle Expands Storage Efficiency Leadership

March 18, 2015 15:00 GMT

On March 17, 2015, a new SPC-2 result was posted for the Oracle ZFS Storage ZS4-4 at 31,486.23 SPC-2 MBPS™. (The SPC-2 benchmark, for those unfamiliar, is an independent, industry-standard performance benchmark test for sequential workloads that simulates large file operations, database queries, and video streaming.) This is the best throughput number ever posted for a system costing roughly half a million USD, and it's on par with the best in terms of raw sequential performance. (Note that the SPC-2 Total Price™ metric includes three years of support costs.) While achieving a raw performance result of this level is impressive (it is fast enough to put us in the #3 overall performance spot, with Oracle ZFS Storage Appliances now holding 3 of the top 5 SPC-2 MBPS™ benchmark results), it is even more impressive when looked at within the context of the "Top Ten" SPC-2 results.

System                                    SPC-2 MBPS   $/SPC-2 MBPS   TSC Price     Results Identifier
HP XP7 storage                            43,012.52    $28.30         $1,217,462    B00070
Kaminario K2                              33,477.03    $29.79         $997,348.00   B00068
Oracle ZFS Storage ZS4-4                  31,486.22    $17.09         $538,050      B00072
Oracle ZFS Storage ZS3-4                  17,244.22    $22.53         $388,472      B00067
Oracle ZFS Storage ZS3-2                  16,212.66    $12.08         $195,915      BE00002
Fujitsu ETERNUS DX8870 S2                 16,038.74    $79.51         $1,275,163    B00063
IBM System Storage DS8870                 15,423.66    $131.21        $2,023,742    B00062
IBM SAN VC v6.4                           14,581.03    $129.14        $1,883,037    B00061
Hitachi Virtual Storage Platform (VSP)    13,147.87    $95.38         $1,254,093    B00060
HP StorageWorks P9500 XP Storage Array    13,147.87    $88.34         $1,161,504    B00056

Source: “Top Ten” SPC-2 Results, http://www.storageperformance.org/results/benchmark_results_spc2_top-ten

SPC-2 MBPS = the Performance Metric
$/SPC-2 MBPS = the Price-Performance Metric
TSC Price = Total Cost of Ownership Metric
Results Identifier = A unique identification of the result

Complete SPC-2 benchmark results may be found at
http://www.storageperformance.org/results/benchmark_results_spc2.

SPC-2, SPC-2/E, SPC-2 MBPS, SPC-2 Price-Performance, and SPC-2 TSC are trademarks of Storage Performance Council (SPC).

Results as of March 16, 2015, for more information see http://www.storageperformance.org

Perhaps a better way to look at this top-ten list is with a graphical depiction. When you lay it out with a reversed axis for SPC-2 Total Price™, you get an "up-and-to-the-right is good" effect, with a "fast and cheap" quadrant.

Looking at it this way, a couple of things are clear. First, the Oracle ZFS Storage ZS4-4 is far and away the fastest result anywhere near its price point. Sure, there are two faster systems, but they are much more expensive, by about a million USD or more, so you could almost buy two ZS4-4 systems for the same money. A second point this brings up is that the ZS3-2 model is the cheapest system in the top ten, and has performance on par with or better than some of the very expensive systems in the lower-left quadrant.

In fact, the ZS3-2 model has for some time now held the #1 position in SPC-2 price-performance™, with a score of $12.08 per SPC-2 MBPS™. So we already hold the performance-efficiency lead in terms of the overall SPC-2 price-performance™ metric as well.
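
As a quick sanity check (not part of the published results), the price-performance metric is simply the total system price divided by the throughput; the short Python snippet below reproduces the two Oracle figures from the table above.

#!/usr/bin/python
# $/SPC-2 MBPS = TSC Price / SPC-2 MBPS, using the figures from the table above
results = [
    ("Oracle ZFS Storage ZS4-4", 538050.0, 31486.22),
    ("Oracle ZFS Storage ZS3-2", 195915.0, 16212.66),
]
for name, tsc_price, mbps in results:
    print "%-26s $%.2f per SPC-2 MBPS" % (name, tsc_price / mbps)
# prints roughly $17.09 and $12.08, matching the published price-performance numbers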

Of course, the efficiency story doesn't stop with performance. There's also operational efficiency to consider. Many others have blogged about our DTrace storage analytics features, which provide deeper insight and faster time to resolution than just about anything else out there, and about our simple yet powerful browser user interface (BUI) and command line interface (CLI) tools, so I won't go deeply into all that. Suffice it to say that we can get jobs done faster, saving operational costs over time. Strategic Focus Group did a usability study versus NetApp and found the following results:

Source: Strategic Focus Group Usability Comparison: https://go.oracle.com/LP=4206/?elqCampaignId=6667

All of this goes to show that the ZFS Storage Appliance offers superior storage efficiency from performance, capex, and opex perspectives. This is true in any application environment. In fact all the above metrics are storage-specific and agnostic as to who your platform or application vendors are. Of course, as good as this general storage efficiency is, it’s even better when you are looking at an Oracle Database storage use case, where our unique co-engineered features like Oracle Intelligent Storage Protocol, our deep snapshot and Enterprise Manager integration, and exclusive options like Hybrid Columnar Compression come into play to make the Oracle ZFS Storage Appliance even more efficient.

The Oracle ZFS Storage Appliance is “efficient” by many different metrics. All things taken in total, we think it’s the most efficient storage system you can buy.

March 17, 2015

Joerg MoellenkampEvent announcement - Oracle Business Breakfast - Registration link for "Netzwerktechnik und -software von Oracle"

March 17, 2015 09:19 GMT
As promised, here is the registration link for the breakfast on March 27, 2015 in Hamburg: you can register for the Oracle Business Breakfast here.

March 16, 2015

OpenStackOpenStack Summit Vancouver - May 18-22

March 16, 2015 23:42 GMT

The next OpenStack developers and users summit will be in Vancouver. Oracle will again be a sponsor of this event, and we'll have a bunch of our team present from Oracle Solaris, Oracle Linux, ZFS Storage Appliance and more. The summit is a great opportunity to sync up on the latest happenings in OpenStack. By this stage the 'Kilo' release will be out and the community will be in full plan mode for 'Liberty'. Join us there and see what the Oracle teams have been up to recently!

-- Glynn Foster

March 13, 2015

Joerg MoellenkampEvent announcement - Oracle Business Breakfast - Netzwerktechnik und -software von Oracle

March 13, 2015 12:18 GMT
We are planning another Oracle Business Breakfast, expected to take place on March 27. This breakfast turns to a somewhat more hardware-oriented topic: Oracle offers a range of networking products, from 40 Gbit/s Ethernet switches to InfiniBand components to components that allow SAN and LAN networks to be converged onto a single InfiniBand infrastructure. In the first part we would like to give an overview of this hardware.

The second part is of a practical nature and takes a closer look at Solaris features related to the broader topic of networking, ranging from SR-IOV-based VNICs to VXLANs, and from the Integrated Load Balancer to what's new in network traffic monitoring.

At the end we would like to give an introduction to InfiniBand and its basic configuration in Solaris 11.2 and Oracle Linux, including the use of the particularly fast and efficient RDMA variants of NFS and iSCSI.

A registration link will follow as soon as it is available.

March 12, 2015

James DickensAre Google server nodes the new mainframe?

March 12, 2015 00:11 GMT


A friend mentioned that small and mid-sized companies are still using mainframes, but I think an argument can be made that Google, Facebook and friends are really creating the new mainframe with their various server nodes, which are often made up of custom, task-specific hardware and servers.

One of the explanations I learned over the years of what made a mainframe a mainframe was that it had custom hardware for specific components of the system, such as a disk controller that could be given a list of blocks or files and would go fetch the data and place it into system memory or return it to the program, or a network controller that would do the same with network I/O.

Have Google, Facebook and friends created, by networking one or more of their server nodes, what could effectively be called a mainframe? For example, consider 1, 10, or even 1000 server nodes running memcached as a storage controller: the programmer can, in a single function call, task the memcached servers with fetching and returning literally thousands of requests for data, all returned over the network at what many people consider the equivalent of drinking from a fire hose, because a thousand nodes are essentially trying to return the data at wire speed. Various other technologies, such as RESTful APIs, allow servers to store or return chunks of data to other servers. A sketch of such a multi-get is shown below.
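
For illustration, here is a minimal Python sketch of that single-call fan-out using the python-memcache client; the node list and key names are made up for the example.

#!/usr/bin/python
# Sketch only: one call gathers values for thousands of keys spread across
# many memcached nodes. Node addresses and key names are made-up examples.
import memcache

nodes = ["10.0.0.%d:11211" % i for i in range(1, 11)]   # ten memcached servers
mc = memcache.Client(nodes)

keys = ["user:%d:profile" % i for i in range(1000)]     # thousands of requests
profiles = mc.get_multi(keys)                           # one call, fanned out to the owning nodes

print "fetched %d of %d keys" % (len(profiles), len(keys))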

Since a mainframe is not limited to a single box or a particular box size, why not consider a rack of server nodes, or even one or more rows of a data center, a single system? Projects like Mesos and Kubernetes let the programmer treat the full cluster of nodes as a single system. Sure, back in the 60s, 70s and 80s mainframes were made from a lot of custom parts rather than commodity parts, but these companies do build custom nodes: some are tuned for disk storage, others specifically for networking or RAM, and in the future they will be moving to GPU-capable nodes with one or more GPU processors on board. Yes, it would be the equivalent of building a mainframe out of Lego blocks.



James DickensFirst new post after a long rest...

March 12, 2015 00:10 GMT


Well, time to wake up this blog. I will be posting stuff again, beware.


March 10, 2015

Jeff VictorMaintaining Configuration Files in Solaris 11.2

March 10, 2015 14:53 GMT

Introduction

Have you used Solaris 11 and wondered how to maintain customized system configuration files? In the past, and on other Unix/Linux systems, maintaining these configuration files was fraught with peril: extra bolt-on tools were needed to track changes, verify that inappropriate changes were not made, and fix the files when something broke them.

A combination of features added to Solaris 10 and 11 address those problems. This blog entry describes the current state of related features, and demonstrates the method that was designed and implemented to automatically deploy and track changes to configuration files, verify consistency, and fix configuration files that "broke." Further, these new features are tightly integrated with the Solaris Service Management Facility introduced in Solaris 10 and the packaging system introduced in Solaris 11.

Background

Solaris 10 added the Service Management Facility, which significantly improved on the old, unreliable pile of scripts in /etc/rc#.d directories. This also allowed us to move from the old model of system configuration information stored in ASCII files to a database of configuration information. The latter change reduces the risk associated with manual or automated modifications of text files. Each modification is the result of a command that verifies the correctness of the change before applying it. That verification process greatly reduces the opportunities for a mistake that can be very difficult to troubleshoot.

During updates to Solaris 10 and 11 we continued to move configuration files into SMF service properties. However, there are still configuration files, and we wanted to provide better integration between the Solaris 11 packaging facility (IPS), and those remaining configuration files. This blog entry demonstrates some of that integration, using features added up through Solaris 11.1.

Many Solaris systems need customized email delivery rules. In the past, providing those rules required replacing /etc/mail/sendmail.cf with a custom file. However, this created the need to maintain that file - restoring it after a system update, verifying its integrity periodically, and potentially fixing it if someone or something broke it.

Method

IPS provides the tools to accomplish those goals, specifically:

  1. maintain one or more versions of a configuration file in an IPS repository
  2. use IPS and AI (Automated Installer) to install, update, verify, and potentially fix that configuration file
  3. automatically perform the steps necessary to re-configure the system with a configuration file that has just been installed or updated.

The rest of this assumes that you understand Solaris 11 and IPS.

In this example, we want to deliver a custom sendmail.cf file to multiple systems. We will do that by creating a new IPS package that contains just one configuration file. We need to create the "precursor" to a sendmail.cf file (sendmail.mc), which will be expanded by sendmail when it starts. We also need to create a custom manifest for the package. Finally, we must create an SMF service profile, which will cause Solaris to understand that a new sendmail configuration is available and should be integrated into its database of configuration information.

Here are the steps in more detail.

  1. Create a directory ("mypkgdir") that will hold the package manifest and a directory ("contents") for package contents.
    $ mkdir -p mypkgdir/contents
    $ cd mypkgdir
    
    Then create the configuration file that you want to deploy with this package. For this example, we simply copy an existing configuration file.
    $ cp /etc/mail/cf/cf/sendmail.mc contents/custom_sm.mc
    
  2. Create a manifest file in mypkgdir/sendmail-config.p5m: (the entity that owns the computers is the fictional corporation Consolidated Widgets, Inc.)
    set name=pkg.fmri value=pkg://cwi/site/sendmail-config@8.14.9,1.0
    set name=com.cwi.info.name value=Solaris11sendmail
    set name=pkg.description value="ConWid sendmail.mc file for Solaris 11, accepts only local connections."
    set name=com.cwi.info.description value="Sendmail configuration"
    set name=pkg.summary value="Sendmail configuration"
    set name=variant.opensolaris.zone value=global value=nonglobal
    set name=com.cwi.info.version value=8.14.9
    set name=info.classification value=org.opensolaris.category.2008:System/Core
    set name=org.opensolaris.smf.fmri value=svc:/network/smtp:sendmail
    depend fmri=pkg://solaris/service/network/smtp/sendmail type=require
    file custom_sm.mc group=mail mode=0444 owner=root \
       path=etc/mail/cf/cf/custom_sm.mc
    file custom_sm_mc.xml group=mail mode=0444 owner=root \
       path=lib/svc/manifest/site/custom_sm_mc.xml        \
       restart_fmri=svc:/system/manifest-import:default   \
       refresh_fmri=svc:/network/smtp:sendmail            \
       restart_fmri=svc:/network/smtp:sendmail
    
    
    The "depend" line tells IPS that the package smtp/sendmail must already be installed on this system. If it isn't, Solaris will install that package before proceeding to install this package.
    The line beginning "file custom_sm.mc" gives IPS detailed metadata about the configuration file, and indicates the full pathname - within an image - at which the macro should be stored. The last line specifies the local file name of of the service profile (more on that later), and the location to store it during package installation. It also lists three actuators: SMF services to refresh (re-configure) or restart at the end of package installation. The first of those imports new manifests and service profiles. Importing the service profile changes the property path_to_sendmail_mc. The other two re-configure and restart sendmail. Those two actions expand and then use the new configuration file - the goal of this entire exercise!

  3. Create a service profile:
    $ svcbundle -o contents/custom_sm_mc.xml -s bundle-type=profile \
        -s service-name=network/smtp -s instance-name=sendmail -s enabled=true \
        -s instance-property=config:path_to_sendmail_mc:astring:/etc/mail/cf/cf/custom_sm.mc
    
    That command creates the file custom_sm_mc.xml, which describes the profile. The sole purpose of that profile is to set the sendmail service property "config/path_to_sendmail_mc" to the name of the new sendmail macro file.

  4. Verify correctness of the manifest. In this example, the Solaris repository is mounted at /mnt/repo1. For most systems, "-r" will be followed by the repository's URI, e.g. http://pkg.oracle.com/solaris/release/ or a data center's repository.
    $ pkglint -c /tmp/pkgcache -r /mnt/repo1 sendmail-config.p5m
    Lint engine setup...
    Starting lint run...
    
    $
    
    As usual, the lack of output indicates success.

  5. Create the package, make it available in a repo to a test IPS client.
    Note: The documentation explains these steps in more detail.
    Note: this example stores a repo in /var/tmp/cwirepo. This will work, but I am not suggesting that you place repositories in /var/tmp. You should place a repo in a directory that is publicly available.
    $ pkgrepo create /var/tmp/cwirepo
    $ pkgrepo -s /var/tmp/cwirepo set publisher/prefix=cwi
    $ pkgsend -s /var/tmp/cwirepo publish -d contents sendmail-config.p5m
    pkg://cwi/site/sendmail-config@8.14.9,1.0:20150305T163445Z
    PUBLISHED
    $ pkgrepo verify -s /var/tmp/cwirepo
    Initiating repository verification.
    $ pkgrepo info -s /var/tmp/cwirepo
    PUBLISHER PACKAGES STATUS           UPDATED
    cwi       1        online           2015-03-05T16:39:13.906678Z
    $ pkgrepo list -s /var/tmp/cwirepo
    PUBLISHER NAME                                          O VERSION
    cwi       site/sendmail-config                            8.14.9,1.0:20150305T163913Z
    $ pkg list -afv -g /var/tmp/cwirepo
    FMRI                                                                         IFO
    pkg://cwi/site/sendmail-config@8.14.9,1.0:20150305T163913Z                   ---
    
    

With all of that, you can use the usual IPS packaging commands. I tested this by adding the "cwi" publisher to a running native Solaris Zone and making the repo available as a loopback mount:

# zlogin testzone mkdir /var/tmp/cwirepo
# zonecfg -rz testzone
zonecfg:testzone> add fs
zonecfg:testzone:fs> set dir=/var/tmp/cwirepo
zonecfg:testzone:fs> set special=/var/tmp/cwirepo
zonecfg:testzone:fs> set type=lofs
zonecfg:testzone:fs> end
zonecfg:testzone> commit
zone 'testzone': Checking: Mounting fs dir=/var/tmp/cwirepo
zone 'testzone': Applying the changes
zonecfg:testzone> exit
# zlogin testzone
root@testzone:~# pkg set-publisher -g /var/tmp/cwirepo cwi
root@testzone:~# pkg info -r sendmail-config
          Name: site/sendmail-config
       Summary: Sendmail configuration
   Description: ConWid sendmail.mc file for Solaris 11, accepts only local
                connections.
      Category: System/Core
         State: Not installed
     Publisher: cwi
       Version: 8.14.9
 Build Release: 1.0
        Branch: None
Packaging Date: March  5, 2015 08:14:22 PM
          Size: 1.59 kB
          FMRI: pkg://cwi/site/sendmail-config@8.14.9,1.0:20150305T201422Z

root@testzone:~#  pkg install site/sendmail-config
           Packages to install:  1
            Services to change:  2
       Create boot environment: No
Create backup boot environment: No
DOWNLOAD                                PKGS         FILES    XFER (MB)   SPEED
Completed                                1/1           2/2      0.0/0.0    0B/s

PHASE                                          ITEMS
Installing new actions                         12/12
Updating package state database                 Done
Updating package cache                           0/0
Updating image state                            Done
Creating fast lookup database                   Done
Updating package cache                           2/2

root@testzone:~# pkg verify  site/sendmail-config
root@testzone:~#

Installation of that package causes several effects. Obviously, the custom sendmail configuration file custom_sm.mc is placed into the directory /etc/mail/cf/cf. The sendmail daemon is restarted, automatically expanding that file into a sendmail.cf file and using it. I have noticed that on occasion, it is necessary to refresh and restart the sendmail service.

Conclusion

The result of all of that is an easily maintained configuration file. These concepts can be used with other configuration files, and can be extended to more complex sets of configuration files.

For more information, see these documents:

Acknowledgements

I appreciate the assistance of Dave Miner, John Beck, and Scott Dickson, who helped me understand the details of these features. However, I am responsible for any errors.

March 06, 2015

Robert MilkowskiNetApp vs. ZFS

March 06, 2015 11:29 GMT
Bryan Cantrill's take on NetApp vs. ZFS.

March 04, 2015

Adam LeventhalOn Blogging (Briefly)

March 04, 2015 23:26 GMT

I gave a presentation today on the methods and reasons of blogging for Delphix Engineering.

One of my points was that presentations make for simple blog posts–practice what you preach!

March 02, 2015

Darryl GoveBuilding old code

March 02, 2015 18:37 GMT

Just been looking at a strange link time error:

ld.so.1: lddstub: fatal: tr/dev/nul: open failed: No such file or directory

I got this compiling C++ code that was expecting one of the old Studio compilers (probably Workshop vintage). The clue to figuring out what was wrong was this warning in the build:

CC: Warning: Option -ptr/dev/nul passed to ld, if ld is invoked, ignored otherwise

What was happening was that the library was building using the long-since-deprecated -ptr option. This specified the location of the template repository in really old versions of the Sun compilers. We've long since moved from using a repository for templates. The deprecation process is that the option gets a warning message for a while, then eventually the compiler stops giving the warning and starts ignoring the option. In this case, however, the option gets passed to the linker, and unfortunately it happens to be a meaningful option for the linker:

        [-p auditlib]   identify audit library to accompany this object

So the linker acts on this, and you end up with the fatal link time error.

February 28, 2015

Mike GerdtsOne image for native zones, kernel zones, ldoms, metal, ...

February 28, 2015 15:14 GMT

In my previous post, I described how to convert a global zone to a non-global zone using a unified archive.  Since then, I've fielded a few questions about whether this same approach can be used to create a master image that is used to install Solaris regardless of virtualization type (including no virtualization).  The answer is: of course!  That was one of the key goals of the project that invented unified archives.

In my earlier example, I was focused on preserving the identity and other aspects of the global zone and knew I had only one place that I planned to deploy it.  Hence, I chose to skip media creation (--exclude-media) and used a recovery archive (-r).  To generate a unified archive of a global zone that is ideal for use as an image for installing to another global zone or native zone, just use a simpler command line.

root@global# archiveadm create /path/to/golden-image.uar

Notice that by using fewer options we get something that is more usable.

What's different about this image compared to the one created in the previous post?

February 27, 2015

Darryl GoveImproper member use error

February 27, 2015 20:47 GMT

I hit this Studio compiler error message, and it took me a few minutes to work out what was going wrong. So I'm writing it up in case anyone else hits it. Consider this code:

typedef struct t
{
   int t1;
   int t2;
} t_t;

struct q
{
   int q1;
   int q2;
};

void main()
{
   struct t v; // Instantiate one structure
   v.q1 = 0;   // Use member of a different structure
}

The C compiler produces this error:

$ cc odd.c
"odd.c", line 16: improper member use: q1
cc: acomp failed for odd.c 

The compiler recognises the structure member, and works out that it's not valid to use it in the context of the other structure - hence the error message. But the error message is not exactly clear about it.