Why NZ needs GDPR

Following on from my previous post, Making a hash of privacy, I want to talk about the communication I’ve had with my bank regarding their use of the Facebook Custom Audience API.

Having looked at how the Custom Audience API worked I asked my bank to provide the following:

  • An explanation of why they uploaded my details to Facebook
  • How they uploaded the information (Custom Audience API, or some other mechanism)
  • The information they uploaded (email address, name, DoB, etc.)
  • The dates the uploads occurred
  • The clause in their terms and conditions that permits the uploading of customer details to Facebook

Their initial response was as follows:

Please know we take the protection of our customers data very seriously and we are happy to answer your questions.

The information you received from Facebook entitled “advertisers_who_uploaded_a_contact_list_with_your_information” refers to a specific advertising option BankX (and other businesses globally) utilise on Facebook called ‘Custom Audiences’.

BankX invests in advertising across many channels like TV, Radio and digital for example Facebook, Google and Trademe. We promote the products and services we offer like Home Loans, Business Banking and Credit Cards across these channels and we optimise them to ensure we are getting as much value as possible from our investment. One way we do this is to create what are known as ‘suppression lists’. For example, if a person is already a Business Banking customer with BankX, then we don’t want to be showing them our Business Banking advertising, as that would be inefficient and also a poor experience for our Business Banking customers. Facebook offers an option to create a Business Banking suppression list which means Facebook doesn’t display any of our advertising to people on Facebook who have been flagged as a Business Banking customer. The following is the process used to create a suppression list:

  1. BankX identifies Business Banking customers in its database and extracts their email addresses. NOTE: It’s important to understand at this point that Facebook has already hashed (encrypted) all the data that users have provided to it in the past.
  2. As BankX uploads the email addresses to Facebook they are hashed locally in the browser before they are uploaded to Facebook. Hashing turns the data into short fingerprints that can’t be reversed. It happens before the data is sent to Facebook, so Facebook doesn’t see the email address, it simply see’s the hashed data.
  3. Once the BankX hashed data is uploaded Facebook then matches the hashed data as best as it can.
  4. The matches are added to a Custom Audience for BankX.
  5. The matched and unmatched hashes are deleted.

The Custom Audience that’s been created doesn’t have any identifiable information, it is simply a ‘suppression list’ that we can utilise so our advertising doesn’t go to those people on Facebook. We don’t know who was matched and we can’t reverse the process and download any information. No one else has access to this data other than BankX, we don’t share the data with anyone.

They also quoted the Custom Audience API terms and conditions:

Facebook will not give access to or information about your Custom Audience to third parties or other advertisers, use your Custom Audience to append to the information we have about our users or build interest-based profiles, or use your Custom Audience except to provide services to you.

If it were true that they were only uploading email addresses, there would never have been a match, because my banking email address is different from my Facebook email address. I therefore asked them again: have you, on any occasion, uploaded customer details other than email addresses?

The next response came in the form of a phone call from BankX’s Technical Marketing Lead. To summarise:

  • The assertion that they had only uploaded email addresses was incorrect. They actually achieved a 92% match rate by uploading:
    • Email address
    • First name
    • Last name
    • Date of birth year
    • Gender
  • They couldn’t tell me how many times my data had been uploaded, or the suppression lists I had been added to, as they kept no audit record.
  • Finally, in response to my complaint, they said they would look at:
    • moving the uploading of customer data to Salesforce Marketing Cloud as this integrates directly with the Facebook Custom Audience API, and
    • giving customers the ability to opt out of having their details shared with third parties

With regard to whether their terms and conditions permitted the uploading of customer details to Facebook, they referred to a loosely worded clause that permits them to share my personal details with any third party “for the purposes of managing the customer’s relationship with us”. In essence, carte blanche to do anything they like with my data.

The whole experience has left me disappointed. Disappointed my bank thought it was OK to upload my personal details to Facebook. Disappointed they actually trust Facebook’s terms and conditions. Disappointed they failed to understand that hashing does not equal anonymity, or how trivial it is to decode fields with such a small range of possible values. I’m also dubious they’re telling the whole truth. Are the fields listed above the only fields they’re uploading? With no audit record I’ll never know.

Conclusion

I have no sympathy for businesses engaged in this kind of activity. I don’t care if it’s the wild west out there, businesses have a moral obligation to their customers. It’s not good enough to say “everyone else is doing it, so it must be OK”. Behave more responsibly with your customers’ personal data if you don’t want to be judged in the court of public opinion.

Whilst I was clearly naive in expecting higher standards from my bank, there are reasons to be hopeful. With the recent Cambridge Analytica scandal and now GDPR, there’s never been a better time to debate the usage and ownership of personal data in New Zealand. Let’s have the debate now and legislate to protect consumers.


Making a hash of customer privacy

Over the past couple of weeks I’ve been untangling myself from Facebook. For the most part this involved writing a lot of emails and messages to ensure I had the correct contact details for everyone. As soon as I confirmed someone’s details I disconnected and deleted any messages. Once this process was complete, and I’d deleted everything from my activity stream, I decided to download my data one last time.

As luck would have it, Facebook recently broadened the scope of what’s included in the “your information” download. Look inside the “ads” folder and you’ll find a list entitled

“Advertisers who’ve uploaded a contact list with your information
Advertisers who run ads using a contact list they’ve uploaded which includes contact info that you’ve shared with them or with one of their data partners”

I couldn’t quite believe my eyes when I looked at the contents. Amongst the list of companies who’d uploaded customer details to Facebook was my bank. My initial reaction was to make a formal complaint but before doing so I decided to do a little digging. How had my bank uploaded my details and what information might they have shared?

Facebook defines two mechanisms for controlling who sees an advertisement: targeting and audiences. Targeting is based on user attributes such as demographics and location. Audiences are based on customer data. Businesses upload their customer data to Facebook via the Custom Audience API. Facebook then matches this customer data with users in order to target a business’ customers.

To increase the likelihood of finding a match, businesses can upload the following information:

  • Email address
  • Phone number
  • Gender (m/f)
  • Date of birth year (1900-present)
  • Date of birth month (01-12)
  • Date of birth day (01-31)
  • First name & last name (a-z, lowercase, UTF8)
  • City (a-z lower case, no special characters, punctuation or white space)
  • Postcode (lowercase)
  • Country (ISO 3166 2 letter country code)
  • Mobile Advertiser ID
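Facebook’s Custom Audience documentation specifies that these fields are normalised (trimmed, lower-cased) and SHA-256 hashed before upload. The following is a rough Python sketch of that pipeline; the field keys and record values are illustrative, not the exact API format:

```python
import hashlib

def normalize(value: str) -> str:
    """Trim whitespace and lower-case, per the Custom Audience normalisation rules."""
    return value.strip().lower()

def hash_field(value: str) -> str:
    """SHA-256 hash of the normalised value, hex-encoded."""
    return hashlib.sha256(normalize(value).encode("utf-8")).hexdigest()

# An example record an advertiser might upload (values are made up)
record = {
    "email": "Jane.Doe@example.com",
    "fn": "Jane",
    "ln": "Doe",
    "doby": "1980",
    "gen": "f",
}

hashed = {key: hash_field(value) for key, value in record.items()}
for key, value in hashed.items():
    print(key, value)
```

Note that the hashing is entirely deterministic: everyone who uploads `jane.doe@example.com` produces the same fingerprint, which is precisely what makes matching (and reversal) possible.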

The API documentation includes the following statement:

“To create audiences, share your data in an hashed format to maintain privacy. Facebook compares this with our hashed data to see if we should add someone on Facebook to your ad’s audience.”

The US Federal Trade Commission clearly states that hashing is not a secure method for anonymising data. Hashing fields with a ridiculously small number of possible values is nothing more than window dressing, and Facebook know this. Even names don’t require a massive dataset. For example, a rainbow table containing ~128,000 entries would suffice to identify 90% of surnames in the US, and we know that Facebook, with upwards of 2 billion accounts, already has this information. Email addresses are equally vulnerable and can be decoded in less than 500ms for as little as 4c.
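To illustrate just how little protection hashing offers for low-entropy fields, here is a small sketch: given the hash of a “birth year plus gender” value, an exhaustive search over the ~240 possible inputs recovers the original immediately. The separator and encoding are assumptions for the demo, not Facebook’s exact wire format:

```python
import hashlib
from itertools import product

def sha256_hex(value: str) -> str:
    return hashlib.sha256(value.encode("utf-8")).hexdigest()

# A hashed "birth year|gender" field, as might appear in an upload (illustrative)
target = sha256_hex("1980|f")

# Exhaustive search over every possible value: ~120 years x 2 genders
candidates = (f"{year}|{gender}"
              for year, gender in product(range(1900, 2019), ["m", "f"]))

# Find the candidate whose hash matches the "anonymised" value
recovered = next(c for c in candidates if sha256_hex(c) == target)
print(recovered)
```

A few hundred hash operations complete in well under a millisecond on any modern machine, which is why hashing such fields is window dressing rather than anonymisation.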

This kind of activity is a betrayal of trust. Any business uploading customer details to a social media platform is wilfully leaking that data. Naivety is no excuse. I have therefore lodged a formal complaint with my bank and requested full disclosure of the information they submitted to Facebook. There is, however, a distinct possibility they won’t be able to provide this information, as I very much doubt they have an audit record for every call they made to the Custom Audience API. Regardless, I’ll be following this up with the New Zealand Privacy Commissioner.

More details to follow.

Facebook Countermeasures

The recent spate of revelations surrounding data leaks, subversion of the democratic process, and general disregard for user privacy were the last straw. It’s time to say goodbye to Facebook. However, not everyone feels so strongly and cutting off a channel of communication as ubiquitous as Facebook can be challenging. So what can you do to protect yourself?

Note: The web browser sections below are Firefox-specific.

Behavioural Changes

Don’t use Facebook Mobile applications

We put a huge amount of trust in mobile application developers. More often than not they have access to our phone’s location, camera, microphone, storage, contact list, and more. Unfortunately, unsurprisingly – however you view it – Facebook cannot be trusted. For example, the Android version of Messenger was recently caught harvesting users’ contact lists, call history, and SMS metadata.

Lock down your profile

Applying the most restrictive access settings to your profile and posts might sound obvious but it’s surprising how many people don’t do it. This includes limiting who can look you up via your email or phone number (never give Facebook your phone number). The ease with which these features can be abused was first revealed back in 2015. Fast forward to 2018 and Facebook has finally admitted that upwards of two billion accounts have been scraped.

Regularly review your privacy settings

Facebook has form when it comes to enabling invasive new features by default. A case in point being facial recognition, which was enabled by default when introduced back in 2011. Opt out of these changes by regularly reviewing your privacy settings.

Don’t sign into 3rd-party websites using Facebook login

When you log into a 3rd-party site using Facebook Login you’re giving that site permission to access your profile information. However, it doesn’t stop there; you’re also giving access to any malicious 3rd-party JavaScript embedded in that site.

For more information on Facebook Login vulnerabilities, see:

Note: If you disable Facebook Platform (see below) you’ll no longer be able to log into 3rd-party sites using Facebook Login.

Disable Facebook Platform

Facebook Platform provides a set of application programming interfaces (APIs) that give third-party developers access to your data. This is exactly how Cambridge Analytica got access to the user data of over 87 million people, and that’s just the tip of the iceberg. If you have “Apps, websites and games” turned on, you’re putting yourself (and your friends) at serious risk.

The Electronic Frontier Foundation have provided straightforward instructions on how to opt out of Platform API sharing.

Don’t ‘like’ stuff

If someone offered you a service but in exchange you had to tell them your sexual orientation, ethnicity, religious and political views, personality traits, intelligence, happiness, use of addictive substances, parental separation, age, and gender would you accept? Chances are you wouldn’t. A 2013 study of 58,000 volunteers showed how your Likes reveal all of the above with frightening accuracy.

Don’t click on LINKS

This isn’t limited to advertisements; as a general rule you should never follow links to external sites while logged into Facebook. This is really important because you actually have two profiles: a Facebook profile and a Data Management Platform (DMP) profile.

For more information on how online advertising works see:

If you’re looking for a trustworthy ad-blocker, uBlock Origin is a great choice. I say trustworthy because there are a number of so-called ad-blockers that actually accept payments from advertisers in exchange for white-listing their sites.

Don’t upload media with embedded EXIF metadata

Exif (Exchangeable image file format) defines a standard for embedding metadata in image and sound files. This metadata can include your device’s serial number and GPS coordinates. Serial numbers are important because they identify every other photo taken by a particular device. Exif metadata can also be fed into specialised search engines, including Google Image Search.

The power of Exif metadata is exemplified by the story of Higinio O. Ochoa III, an alleged Anonymous hacker from Texas (see also A Picture is Worth a Thousand Words, Including Your Location, by the Electronic Frontier Foundation).

If you have an Android phone ObscuraCam is a good option for stripping Exif metadata before uploading to Facebook.

Isolate your usage of Facebook.com

Facebook recently confirmed what most people already knew: it tracks and profiles users and non-users alike. You can minimise your exposure to tracking in one of two ways:

Option 1: Install the ‘Facebook Container’ add-on

Firefox Containers facilitate the segregation of site data by giving each container its own cache, cookie storage, indexeddb, and localStorage. Containers were initially only available in Firefox Nightly. In September 2017 they became widely available via the Firefox Multi-Account Containers add-on. In March 2018 Mozilla released Facebook Container – a container-based add-on designed to isolate your web activity from Facebook.

For more information see:

Option 2: Use a dedicated browser profile

If you use Firefox, you already have a default profile. It’s where Firefox stores your history, bookmarks, installed add-ons, saved passwords, etc. Profiles also have their own cache, cookie storage, indexeddb, and localStorage. For all intents and purposes, a profile is a completely separate browser. You can see information about your current profile(s) by typing ‘about:profiles’ in the Firefox address bar.

The best way to fully isolate Facebook from your general day-to-day browsing is to create a new profile whose sole purpose is accessing Facebook. Information on adding and removing profiles can be found here.
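Assuming a profile named ‘facebook’ (the name is arbitrary), the profile can be created and launched from a terminal:

```shell
# Open the profile manager to create a profile named "facebook"
firefox --ProfileManager

# Launch the dedicated profile alongside your default one;
# -no-remote stops it joining the already-running Firefox instance
firefox -P facebook -no-remote
```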

Now that you’re no longer using your default profile for accessing Facebook, you should block all Facebook domains and cookies in that profile.

When it comes to domain blocking in the browser my go-to tool is uMatrix. From a privacy perspective uMatrix is ideal because it actually blocks requests to blacklisted domains. On the flip side it’s not the most user-friendly for non-technical users. At the time of writing, the following rules should suffice:

* facebook.com * block
* facebook.com.edgekey.net * block
* facebook.com.edgesuite.net * block
* facebook.net * block
* facebook.net.edgekey.net * block
* facebook-web-clients.appspot.com * block
* fb.com * block
* fb.me * block
* fbcdn.com * block
* fbcdn.net * block
* fbsbx.com * block
* fbsbx.com.online-metrix.net * block
* m.me * block
* messenger.com * block
* tfbnw.net * block

Information on adding uMatrix rules can be found on the uMatrix Wiki.

Non-Facebook users

If you don’t have a Facebook account, or you’ve deleted it, and are technically inclined, you can attempt to block Facebook at the network level:

  • Block all known Facebook domains at the router or in your computer’s hosts file.
  • Get hold of a Raspberry Pi and install Pi-hole, preferably in conjunction with DNSCrypt.
  • Use a filtering proxy such as Privoxy (P.S. never download anything from SourceForge as there have been numerous instances of malware being bundled with SourceForge downloads).
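As a sketch of the hosts-file option, the following Python snippet generates blocking entries from the domain list used in the uMatrix rules above (pointing lookups at the 0.0.0.0 sinkhole address is a common convention; the domain list is abbreviated):

```python
# Domains taken from the uMatrix rule list above (abbreviated)
FACEBOOK_DOMAINS = [
    "facebook.com", "facebook.net", "fb.com", "fb.me",
    "fbcdn.com", "fbcdn.net", "fbsbx.com", "m.me",
    "messenger.com", "tfbnw.net",
]

def hosts_entries(domains):
    """Map each domain (and its www. variant) to 0.0.0.0 so lookups fail fast."""
    lines = []
    for domain in domains:
        lines.append(f"0.0.0.0 {domain}")
        lines.append(f"0.0.0.0 www.{domain}")
    return "\n".join(lines)

# Append the output to /etc/hosts (or your router's blocklist)
print(hosts_entries(FACEBOOK_DOMAINS))
```

Remember that hosts-file blocking only covers exact hostnames, so subdomains need their own entries; this is one reason DNS-based blockers like Pi-hole are more robust.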

It should be noted that blocking Facebook at the network level isn’t foolproof. New domains create a constantly moving target and applications can always bypass DNS-based blockers by using an IP address.

Additional information

Setting up a Local p2 Repository

I’m currently working on an Eclipse RCP application that uses PAX Logging. The easiest way to use the required bundles is to put them in a folder referenced by your Target Platform Definition. However, there are a couple of major drawbacks to this approach:

  1. Tycho requires p2 metadata in order to resolve dependencies, so the local folder approach won’t work if you’re using Tycho for a headless build
  2. Log4j Import-Package failures. Every time I started Eclipse I had to reset my target platform because Import-Package: org.apache.log4j;version="1.2.15";provider=paxlogging was failing

Based on the Equinox documentation I created the following Ant build file to publish a local p2 repository. The only prerequisite is that all the required PAX Logging bundles are placed in a folder, relative to the build file, called lib/plugins (see the p2-build.xml in-line comments). Once you’ve created your repository you can use your favourite web server to make it available to Tycho.

p2-build.xml

<?xml version="1.0" encoding="UTF-8"?>
<project name="local-p2" default="create-p2" basedir=".">

    <!-- 3rd party bundles should be placed in
         a subdirectory called 'plugins' -->
    <property name="source.dir" location="${basedir}/lib" />
    <!-- the directory the repository will be
         created in -->
    <property name="repo.dir" value="${basedir}/repository" />

    <target name="clean">
        <delete dir="${repo.dir}" />
        <mkdir dir="${repo.dir}" />
    </target>

    <target name="create-p2" depends="clean">
        
        <makeurl file="${repo.dir}" property="repo.url" />
        <echo message="Repository URL: ${repo.url}"/>
        <makeurl file="${basedir}/category.xml" property="category.file.url" />

        <!-- Use a fileset include to avoid hard-coding
             the equinox launcher jar filename -->
        <pathconvert property="launcher.jar">
            <fileset dir="${eclipse.home}/plugins/">
                <include name="org.eclipse.equinox.launcher_*.jar" />
            </fileset>
        </pathconvert>
        <echo message="Using Equinox launcher: ${launcher.jar}"/>

        <!-- Assumes 3rd party bundles are located in
             ${source.dir}/plugins -->
        <p2.publish.featuresAndBundles
            repository="${repo.url}"
            publishArtifacts="true"
            compress="false"
            source="${source.dir}" />

        <!-- See category.xml -->
        <exec executable="java">
            <arg value="-jar" />
            <arg value="${launcher.jar}" />
            <arg value="-console" />
            <arg value="-consolelog" />
            <arg value="-application" />
            <arg value="org.eclipse.equinox.p2.publisher.CategoryPublisher" />
            <arg value="-metadataRepository" />
            <arg value="${repo.url}" />
            <arg value="-categoryDefinition" />
            <arg value="${category.file.url}" />
            <arg value="-categoryQualifier" />
        </exec>
    </target>

</project>

category.xml

<?xml version="1.0" encoding="UTF-8"?>
<site>
    <bundle id="org.ops4j.pax.configmanager" version="0.2.2">
        <category name="ops4j" />
    </bundle>
    <bundle id="org.ops4j.pax.logging.pax-logging-api" version="1.7.3">
        <category name="ops4j" />
    </bundle>
    <bundle id="org.ops4j.pax.logging.pax-logging-service" version="1.7.3">
        <category name="ops4j" />
    </bundle>
    <category-def name="ops4j" label="ops4j" />
</site>
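Once the repository is being served, it can be referenced from the Tycho build. A sketch of the relevant pom.xml fragment, with placeholder id and URL:

```xml
<repositories>
    <repository>
        <!-- id and url are illustrative; point the url at wherever
             you serve the generated 'repository' directory -->
        <id>local-p2</id>
        <layout>p2</layout>
        <url>http://localhost:8080/repository</url>
    </repository>
</repositories>
```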

Ubuntu Virtual Machine Setup

To successfully build an e4 RCP application for Mac OS X you need a Linux environment. The following instructions are a record of how I set up Ubuntu 14.04 on VirtualBox 4.3.12.

Prerequisites

  • Windows 7 Professional 64-bit
  • VirtualBox 4.3.12
  • ubuntu-14.04.1-desktop-amd64.iso

Creating an Ubuntu VM on VirtualBox

  1. Open ‘Oracle VM VirtualBox Manager’ and create a new virtual machine with the following attributes:
    • Type: Linux
    • Version: Ubuntu (64bit)
    • RAM: 2048 MB
    • Hard drive: VDI, Dynamically Allocated, 10 GB
  2. Start the new VM and select ubuntu-14.04.1-desktop-amd64.iso as the installation image
  3. Follow the onscreen instructions to install Ubuntu
  4. Shutdown the VM and set the following display properties in the video tab:
    • Video Memory: 128 MB
    • Enable 3D Acceleration: selected
  5. Start Ubuntu and run the following command:
    sudo apt-get install virtualbox-guest-dkms virtualbox-guest-utils virtualbox-guest-x11

    (see Screen Resolution Problem with Ubuntu 14.04 and VirtualBox for more information on why this is required)

  6. Restart the VM

Dev Environment Setup

JDK 1.8

  1. To install JDK 1.8 run the following commands:
    sudo add-apt-repository ppa:webupd8team/java
    sudo apt-get update
    sudo apt-get install oracle-java8-installer

Eclipse for RCP and RAP Developers

  1. Download the latest Linux 64-bit version of ‘Eclipse for RCP and RAP Developers’ (currently Luna RC3) and extract it to a suitable location
  2. Create an eclipse.desktop file by running the following command:
    sudo -H gedit /usr/share/applications/eclipse.desktop

    Add the following content, replacing ‘path-to-eclipse-executable’ with the appropriate value:

    [Desktop Entry] 
    Type=Application 
    Name=Eclipse 
    Icon=eclipse 
    Exec=env UBUNTU_MENUPROXY= path-to-eclipse-executable 
    Terminal=false 
    Categories=Development;IDE;Java;
  3. Install e4 Tools
    1. Go to the eclipse e4 project downloads page
    2. Select the latest release (currently 1.6)
    3. Copy the ‘online p2 repo link’ URL
    4. Start eclipse and create a new ‘software update site’ using the URL from the previous step
    5. Install the ‘Eclipse 4 – Core Tools’ feature
  4. Install EGit via the ‘Help – Eclipse Marketplace’ menu item

Git

  1. Run the following command to install Git:
    sudo apt-get install git

    (see Getting Started – Installing Git for more information)

  2. For a graphical front end, download and install SmartGit

Additional Ubuntu Configuration

Shared Folder(s)

  1. Create a new folder on the Windows host
  2. Open ‘Oracle VM VirtualBox Manager’, select ‘Settings – Shared Folders’ and click the ‘Add’ button
  3. Select the newly created folder by selecting ‘Other’ from the ‘Folder Path’ dropdown list
  4. Check the ‘Auto-mount’ and ‘Make Permanent’ options and click ‘OK’
  5. Run the following command in Ubuntu, replacing ‘your-user-name’ with the appropriate value:
    sudo adduser your-user-name vboxsf
  6. Restart Ubuntu and confirm the folder is accessible by running:
    ls /media/

Auto-hide the Launcher

  1. Select ‘System Settings – Appearance – Behaviour’ to auto-hide the launcher
  2. If the launcher doesn’t reappear when you move the mouse to the designated area simply press ALT+F1 or the Windows key to toggle launcher visibility

Changing the Default Search Categories and Sources

  1. Select ‘System Settings – Security & Privacy – Search’ to exclude online search results
  2. Install dconf Editor by running the following command:
    sudo apt-get install dconf-tools
  3. Open dconf Editor and select ‘com – canonical – unity – lenses’
  4. Add/remove any required/unwanted scopes (see How to get the list of Dash search plugins (scopes) in command line? for more information).

Generating HTML tables with XSLT

The following code snippet is a relatively simple and reusable implementation for generating HTML table content.

    <!-- The number of columns in the generated table -->
    <xsl:variable name="nColumns" select="3" />

    <!--
        XPath indexes start at 1, not 0. 'row' and 'column' template
        parameters therefore default to 1
    -->
    
    <xsl:template name="tableRows">
        <!-- The table content node list -->
        <xsl:param name="items" />
        <!-- The current row index -->
        <xsl:param name="row" select="1" />
        <!--
            Calculate the total number of rows based on the number of
            items and the number of columns
        -->
        <xsl:variable name="nRows" select="ceiling(count($items) div $nColumns)" />
        <xsl:element name="tr">
            <xsl:call-template name="tableColumns">
                <xsl:with-param name="items" select="$items" />
                <xsl:with-param name="row" select="$row" />
            </xsl:call-template>
        </xsl:element>
         <!--
              There's no loop construct in XSLT so we simply increment
              the row index and call the template again if the current
              row index is less than the total number of rows
         -->
        <xsl:if test="$nRows > $row">
            <xsl:call-template name="tableRows">
                <xsl:with-param name="items" select="$items" />
                <xsl:with-param name="row" select="$row + 1" />
            </xsl:call-template>
        </xsl:if>
    </xsl:template>

    <xsl:template name="tableColumns">
        <!-- The table content node list -->
        <xsl:param name="items" />
        <!-- The current row index -->
        <xsl:param name="row" />
        <!-- The current column index -->
        <xsl:param name="column" select="1" />
        <!--
            Calculate the item index based on the current row and
            column index
        -->
        <xsl:variable name="itemIndex" select="(($row - 1) * $nColumns) + $column" />
        <xsl:element name="td">
            <!-- Check the item index is 'in bounds' -->
            <xsl:if test="count($items) >= $itemIndex">
                <xsl:call-template name="tableCellContent">
                    <xsl:with-param name="item" select="$items[$itemIndex]" />
                </xsl:call-template>
            </xsl:if>
        </xsl:element>
         <!--
              There's no loop construct in XSLT so we simply increment
              the column index and call the template again if the current
              column index is less than the specified number of columns
         -->
        <xsl:if test="$nColumns > $column">
            <xsl:call-template name="tableColumns">
                <xsl:with-param name="items" select="$items" />
                <xsl:with-param name="row" select="$row" />
                <xsl:with-param name="column" select="$column + 1" />
            </xsl:call-template>
        </xsl:if>
    </xsl:template>

    <xsl:template name="tableCellContent">
        <xsl:param name="item" />
        <!-- Generate content for the current item -->
        ...
    </xsl:template>
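A call site for the above might look like the following; the source XPath (`/catalogue/product`) and the surrounding table element are illustrative:

```xml
<xsl:element name="table">
    <xsl:call-template name="tableRows">
        <!-- substitute whatever node list you want laid out in the table -->
        <xsl:with-param name="items" select="/catalogue/product" />
    </xsl:call-template>
</xsl:element>
```

Because `tableRows` recurses until the row index reaches `nRows`, and `tableColumns` recurses until the column index reaches `nColumns`, this single call emits the complete grid of `tr`/`td` elements.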

Eclipse 4.4.0 – Tool bar item visibility based on the currently active perspective

Displaying trimmed window tool bar items based on the currently active perspective in an Eclipse 4.4.0 RCP application would seem like a straightforward task, but it actually involves a fair bit of tinkering, and there are a few not-so-obvious pitfalls along the way.

After reading section 31.2 of Lars Vogel’s Eclipse 4 RCP tutorial the obvious approach was to associate a ‘visible-when’ expression with each of my perspective-specific tool bar items. The first problem I encountered was the fact that expressions are not evaluated for items added directly to the main tool bar (see Bug 400217). By ‘added directly’ I mean added as children of the tool bar element in the e4xmi application model. Luckily, when added as ToolBar Contributions visible-when expressions are evaluated as expected.

The next problem I encountered was how to determine the currently active perspective from within a visible-when expression. I couldn’t find any up-to-date documentation on the names of the predefined context variables, and none of the variables listed in Command Core Expressions were available in my Eclipse Luna 4.4.0 RCP application. To determine which context variables were available, I added the following @CanExecute method to one of my tool bar item command handlers:

@CanExecute
public boolean canExecute(final IEclipseContext ictx) {
    final EclipseContext ctx = (EclipseContext) ictx.getParent();
    System.out.println("### START ###");
    for (final Entry<String, Object> entry : ctx.localData().entrySet()) {
        System.out.println(String.format("Key: '%s', value: '%s'", entry.getKey(), entry.getValue()));
    }
    System.out.println("### END ###");
    return true;
}

The following entry was included in the output from this method:

 Key: 'org.eclipse.e4.ui.model.application.ui.advanced.MPerspective', value: 'org.eclipse.e4.ui.model.application.ui.advanced.impl.PerspectiveImpl@2d778add (elementId: my.example.perspective.Edit, tags: [], contributorURI: platform:/plugin/my.example.application) (widget: Composite {}, renderer: org.eclipse.e4.ui.workbench.renderers.swt.PerspectiveRenderer@7fc44dec, toBeRendered: true, onTop: false, visible: true, containerData: null, accessibilityPhrase: null) (label: Edit, iconURI: null, tooltip: , context: PerspectiveImpl (my.example.perspective.Edit) Context, variables: [])'

Based on this output I wrote the following Property Tester to query the elementId of the active perspective:

/**
 * Property tester that checks the <code>elementId</code> of the currently active perspective
 */
public class PerspectivePropertyTester extends PropertyTester {

    /**
     * @param receiver the currently active {@link MPerspective}
     * @param property the property to test, in this case 'elementId'
     * @param args additional arguments, in this case an empty array
     * @param expectedValue the expected value of {@link MPerspective#getElementId()}
     */
    @Override
    public boolean test(final Object receiver, final String property, final Object[] args, final Object expectedValue) {
        final MPerspective perspective = (MPerspective) receiver;
        return perspective.getElementId().equals(expectedValue);
    }
}

This was then configured in plugin.xml as follows:

<?xml version="1.0" encoding="UTF-8"?>
<plugin>
   ...
   <extension point="org.eclipse.core.expressions.propertyTesters">
      <propertyTester class="my.example.application.propertytester.PerspectivePropertyTester"
            id="my.example.application.propertytester.PerspectivePropertyTester"
            namespace="my.example.property"
            properties="perspectiveId"
            type="org.eclipse.e4.ui.model.application.ui.advanced.MPerspective">
      </propertyTester>
   </extension>
   <extension point="org.eclipse.core.expressions.definitions">
      <definition id="my.example.expression.isEditPerspective">
         <with variable="org.eclipse.e4.ui.model.application.ui.advanced.MPerspective">
            <test forcePluginActivation="true"
                  property="my.example.property.perspectiveId"
                  value="my.example.perspective.Edit">
            </test>
         </with>
      </definition>
      <definition id="my.example.expression.isPreviewPerspective">
         <with variable="org.eclipse.e4.ui.model.application.ui.advanced.MPerspective">
            <test forcePluginActivation="true"
                  property="my.example.property.perspectiveId"
                  value="my.example.perspective.Preview">
            </test>
         </with>
      </definition>
   </extension>
   ...
</plugin>

As a footnote, subscribing to UIEvents.UILifeCycle.PERSPECTIVE_OPENED events does not work as expected (see Bug 408681), so the above approach is probably the best option. You could, of course, set the visibility of tool bar items programmatically when switching perspective, but this is far from ideal.