Category Archives: Beginner

Working with the SKY Query API in Chimpegration

Blackbaud recently added the SKY Query API for Raiser’s Edge and Financial Edge. At first I was not really sure how it would benefit our products. We have worked with Lists in Raiser’s Edge, and in the database view we have generated static queries of records that have been processed, but I never thought that we had a need for adding criteria to queries.

Now that I have seen the new Query API, I have been inspired.

How have we got around the lack of a query API so far?

In Chimpegration we push data from Raiser’s Edge to Mailchimp. In order to decide which records to push, we let the user select an NXT list that has previously been created. There are some issues with this.

  • Firstly, the list selection criteria are limited. The user cannot, for example, specify that a constituent with no email address or a blank email address should be ignored.
  • The list functionality does not allow you to choose specific output fields. We offer a limited range of fields that we think the user may want to export.
  • The lists are static. You can update them, but this prevents data being pushed to Mailchimp on a schedule. (There are some workarounds involving Queue but these are awkward.)

How does the Query API solve this?

The query API allows you to programmatically list all queries and to load one of them in particular. You can then run the query and fetch the results. This makes it fully dynamic. Scheduled data exports to Mailchimp would be up to date with the latest information.
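
To make this concrete, here is a minimal sketch in Python of the list-then-run pattern. The Query API endpoint paths here are my assumption based on general SKY API conventions, so check the Query API reference for the exact routes:

import requests

BASE = "https://api.sky.blackbaud.com"
HEADERS = {
    "Authorization": "Bearer <access-token>",
    "bb-api-subscription-key": "<subscription-key>",
}

# List the queries saved in the environment (path assumed).
queries = requests.get(f"{BASE}/query/queries", headers=HEADERS).json()

# Pick one and execute it to fetch up-to-date results (path assumed).
query_id = queries["value"][0]["id"]
results = requests.post(f"{BASE}/query/queries/{query_id}/execute", headers=HEADERS).json()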

The user can choose the output data. Whereas previously they were limited to the fields we offered, now they can choose any value available to them in a query. Most organisations won’t want to push a membership attribute or a gift notepad to Mailchimp, but there is bound to be one out there that wants to do something like that, or some other option that we had not considered. With the full range of output fields they are no longer restricted to what we offer them.

The same goes for criteria. Previously the user was restricted to the fields available in lists. Now the whole range of query fields can be a part of the criteria. If an organisation only wants to export constituents attending a specific event with a large t-shirt size, now they can!

What else can we do with the Query API?

In Chimpegration Classic and in Importacular we look up constituents with criteria sets. Against the database it was much simpler to look up constituents with a wide range of criteria; replicating that through the SKY API has been harder. There is some scope for this based on the constituent list API endpoint. However, the Query API really gives this some muscle.

At first I thought that, even if we could do it, it would not be practical to create a query each time the user wanted to use criteria to search. We would end up creating a lot of queries saved in Raiser’s Edge. It is not certain that the user would have rights to delete a query. It just felt wrong.

However, the Query API includes the ability to generate a query on the fly. When including the filter, output fields and sort values in the json payload, the query can be run without explicitly saving it to the organisation’s environment. This is a real game changer.
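
As a sketch, an ad hoc execution might look like the following. The field names and endpoint shape are illustrative assumptions on my part, not the documented contract:

import requests

ad_hoc_query = {
    # The filter, output fields and sort values all travel in the payload,
    # so nothing needs to be saved to the organisation's environment.
    "filters": [{"field": "Email Address", "operator": "NotBlank"}],
    "output_fields": ["Constituent ID", "Name", "Email Address"],
    "sort_fields": [{"field": "Name", "direction": "ASC"}],
}
results = requests.post(
    "https://api.sky.blackbaud.com/query/queries/execute",
    json=ad_hoc_query,
    headers={"Authorization": "Bearer <access-token>",
             "bb-api-subscription-key": "<subscription-key>"},
).json()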

The SKY API documentation does suggest that this should not be used instead of the constituent list and constituent search endpoints, as they are optimised for search. However, being able to search on more obscure areas of Raiser’s Edge adds a way of looking up records that was previously missing.

When are we releasing the new Chimpegration functionality?

Update: This has now been released!

We are actively developing this functionality. However, the Query API is still in preview, so it is uncertain when this will be released. We may also release our functionality with the caveat that it is in preview and may break at any time (due to changes in the Query API). This is a matter of a few weeks though, so watch this space for a demo!

Importacular and Regular Expression Transformations

Some Preamble

Regular expressions are really complicated. Even now I find it difficult to get my head around them. If you are new to them, check out these two sites:

https://www.regular-expressions.info/ – a great tutorial and reference

https://regexr.com/ – a really good “playground” for testing your regular expressions

Overview

Importacular offers the user the ability to transform incoming data from one value into another. When we first started out, this was simply a “from” value and a “to” value: if the incoming value matched the “from”, it would be changed to the “to”. That was very simple but effective. We soon realised that more power was needed, so Importacular added partial matches and word matches (and clarified that the original was an “exact” match).

We also then added different replacement types. These were “Complete” and “Partial”, and later “Append” and “Prepend”. If you selected “Complete” then all of the incoming value was replaced with the replacement value. If you selected “Partial” then only the matched part would be replaced, keeping the remainder of the original value. “Append” and “Prepend” would add the replacement text to the end or the beginning of the original respectively.
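
As a rough illustration of the four replacement types, here is a sketch of my own in Python (not Importacular’s actual code):

def apply_replacement(value, matched_text, replacement, mode):
    # "Complete": the whole incoming value is replaced.
    if mode == "Complete":
        return replacement
    # "Partial": only the matched part is replaced; the rest is kept.
    if mode == "Partial":
        return value.replace(matched_text, replacement)
    # "Append"/"Prepend": the replacement is added at the end/beginning.
    if mode == "Append":
        return value + replacement
    if mode == "Prepend":
        return replacement + value
    return value

print(apply_replacement("Annual Appeal", "Annual", "Spring", "Partial"))
# Spring Appeal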

Then we added RegEx – firstly for matching and then for replacing. The rest of this post describes how that works.

Matching

Importacular loops through each row in the data transformation grid, continuing to the next row unless the stop processing flag has been set.

If you choose a match type of RegEx, you can put your RegEx in the “From Source” cell and Importacular will try to match on it. For example, if you use this very simple RegEx:

.*

Importacular will match on any number of any character, i.e. it will always match, whatever the incoming value.

If you use this RegEx:

B[a-z]g

It will match “Big”, “Bog” and “Bag”, but also “Bkg”: a “B”, followed by any single letter from a to z, followed by a “g”.

If it finds a match, it will try to replace the value.
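
You can test this behaviour outside Importacular with any regex engine; for example, a quick check of the pattern above in Python (my own sketch):

import re

pattern = re.compile(r"B[a-z]g")
for value in ["Big", "Bog", "Bag", "Bkg", "BG"]:
    # search() mirrors a match anywhere in the incoming value
    print(value, bool(pattern.search(value)))
# Big, Bog, Bag and Bkg match; "BG" does not (no lowercase letter in between)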

Replacing

When you replace using RegEx there are two things to note.

  1. It does not matter how the match was made. It could be a RegEx, a complete, a partial or a word match. Replace is independent of how the match was made.
  2. Importacular does not use the classic RegEx replace mechanism, i.e. creating a capture group (usually with parentheses, sometimes a backslash and parentheses) and then referencing that group with a dollar sign, e.g. $1 or $2. Importacular does not use this method!

Importacular’s replace works like this: it takes the incoming value and applies the regular expression to it in order to extract a value. That value is then used as the replacement text. For example, if the incoming value is:

2022 Annual Appeal

We can extract the year by using the regular expression:

^20[0-9][0-9]

(Note that there are a number of different ways you could get the same information out using a RegEx. This is just one of them)
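
A minimal Python sketch of this extract-as-replacement behaviour (the function is mine, for illustration, and not Importacular’s implementation):

import re

def extract_replacement(value, pattern):
    # Apply the RegEx to the incoming value and use the matched text
    # as the replacement; if nothing matches, leave the value alone.
    m = re.search(pattern, value)
    return m.group(0) if m else value

print(extract_replacement("2022 Annual Appeal", r"^20[0-9][0-9]"))
# 2022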

Say I have a US phone number and I want to get the area code. The phone number is in two different formats e.g. (415)-123-456 or 415-123-456. I can extract the area code using the following:

(?<=\()[0-9]{3}|^[0-9]{3}

If I want to be really clever, I can use a second row in my transformation to transform the area code into the city. In this case after extracting “415” I would transform it to San Francisco.
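
Testing that area-code RegEx against both formats (again a Python sketch, using the lookbehind above):

import re

area_code = re.compile(r"(?<=\()[0-9]{3}|^[0-9]{3}")
for phone in ["(415)-123-456", "415-123-456"]:
    m = area_code.search(phone)
    print(phone, "->", m.group(0) if m else "no match")
# Both formats yield 415, ready for a second transformation row
# that maps 415 to San Francisco.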

Conclusion

The hardest part of using regular expressions in Importacular really is the regular expression itself. I won’t try to convince you otherwise. Hopefully this post will make it easier to use those regular expressions once you have determined what you need. Use the RegExr site (link at the top of this post) to test your matching and replace extraction before you put it into the transformation grid. Once in the transformation grid, you can also check the review screen: if everything has worked as expected, the transformed value will show up there.

Audit Trail Cloud

I have not posted for a while (in case you missed it there was a global pandemic). However I have saved myself for a great announcement. As you have perhaps read, we are about to release Audit Trail Cloud.

Where did it come from?

Audit Trail Cloud is based on the concept we introduced when we developed Audit Trail Professional many years ago. Back then Raiser’s Edge was only available in what is now called the database view. It made use of VBA (Visual Basic for Applications). Whenever a record was opened, AT Pro would take a snapshot of the field values and when it was saved it would compare the changes, saving the difference to the database.

When organisations moved to Blackbaud hosting they lost the ability to make use of AT Pro. We were not allowed to use VBA on the hosted platform. Many of our clients were sad to lose such a great application; others were shocked that they would not be able to take it with them. All of these clients and more were hankering for an Audit Trail that worked with NXT and the SKY API.

So how is Audit Trail Cloud different?

We had a number of challenges when approaching Audit Trail Cloud.

In the beginning we could not do it

In the beginning we just could not do it. There was no simple way of knowing if a record had changed. Of course we could poll RE NXT to see which records had changed recently but that was not really a viable solution. 

Later on we could do it… just differently

Along came webhooks. Webhooks told us when a change was made. This was just what we were waiting for. However, webhooks did not tell us exactly what had changed and we did not know what the previous value was. To get around this, during the setup, we take an initial snapshot of those fields that we are expecting to receive by way of webhooks. We retrieve and store a baseline set of data so that we know the value of a record before it has been changed. At the time of writing this there are a limited number of webhooks, so we are not downloading the whole database. The areas covered at present include biographical, address, contact records and gifts. We track changes for some other areas but cannot extract the baseline data sets easily.
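
In outline, the approach looks something like this. This is a hedged sketch: the payload shape and all names are hypothetical, not Audit Trail Cloud’s actual code:

import requests

baseline = {}  # record id -> last known field values (a real data store in practice)

def handle_webhook(event, headers):
    # The webhook only tells us *that* this record changed (payload shape assumed).
    record_id = event["record_id"]
    current = requests.get(
        f"https://api.sky.blackbaud.com/constituent/v1/constituents/{record_id}",
        headers=headers,
    ).json()
    previous = baseline.get(record_id, {})
    # Diff against the stored baseline to recover old and new values.
    changes = {field: (previous.get(field), value)
               for field, value in current.items()
               if previous.get(field) != value}
    baseline[record_id] = current  # becomes the baseline for the next webhook
    return changes  # field -> (old value, new value)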

Could be awkward

A further consideration is that we do not want to be responsible for your data. Everybody knows how awkward a data breach can be. Lots of red faces all around. But worse is the fact that when data is compromised, responsibility lies with the vendor. As a small company this is not a liability we were prepared to take on. We decided to give you full control of your data. Or at least to pass liability to you and to another company that is much better placed than we are to handle security. All your data is stored in the cloud with AWS (Amazon Web Services). It is locked down from us. Unless you give us the password, we cannot access it.

But wait… there’s more

One great feature that we were definitely not expecting was the breadth to which the changes are covered. While at the moment there are a limited number of areas and fields, changes to them seem to be captured in a lot more places.

We thought that doing an NXT version of Audit Trail would only capture changes in NXT. However it also captures changes made in the database view. As well as that, whereas with Audit Trail Pro we had to implement a workaround to capture global changes, with Audit Trail Cloud those changes are automatically captured in the same way as any other change.

Viewing Records

In Audit Trail Pro we had the Audit Viewer. This was a grid where the changes were shown. You could filter the changes by date, record area and field. We have reproduced this, less the dour Winforms look of the early noughties.

We have also added a constituent tile. Going on to a constituent record, you can view the changes for that one record and see how it has been edited over time.

What do we see in the future?

So far we have been limited to the webhooks that Blackbaud have released. We are told there are more on the way, so as soon as they are released we will add them to our arsenal. Beyond that we hope to be able to add a revert option so that you can undo erroneous changes. We are also adding tiles for other records that have their own NXT space, such as gifts, and for other areas, such as actions, when they are released.

One piece of functionality that we felt was essential but is not included in this first version is a record of the user who made the change. The webhooks just do not give us this information. We are told this is coming imminently so this is our number one priority as soon as it is released.

How do I find out more?

You mean this has not been enough information for you? Well you are in luck. Take a look at our webpage:

Audit Trail Cloud – Zeidman Development

Or sign up to a webinar about Audit Trail Cloud

Audit Trail Cloud Demo (clickmeeting.com)

SKY API and Postman

The SKY API documentation is very good compared to many APIs that I work with. One area that is particularly useful is the “Try It” area where you can test an endpoint with your own data. One small annoyance is that if you want to try a number of different endpoints, you need to go through the whole OAuth2 process each time.

Using Postman allows you to avoid this and also to take advantage of a number of other features of that application. One nice feature is the ability to generate access tokens with ease so that they do not need to be refreshed each time you run an endpoint. (You will still need to refresh them after the allotted 60-minute life, but that should give you plenty of time to run a number of endpoints.)

This post shows you how to set up Postman for OAuth2 in the simplest way possible. I am not an expert in Postman and there may well be things that I have missed that could make the process even easier. Let me know in the comments if you do something differently!

Set up your application

It is probably wise to have a separate Blackbaud application for Postman rather than using an application that you use in production.

This is my very basic app. For Postman you do not need a live redirect URI, but you do need one registered, as the SKY OAuth2 process requires that the value you send in matches a value on your app.

Set up Postman

In my Postman I have a collection of SKY API endpoint calls. You can add authorisation details to the main folder and have each endpoint inherit the credentials from it, rather than having to enter them each time.

When I click on the SKY API folder I can enter my credentials for the whole folder.

Hopefully most of the values are self-explanatory. You should start with the section “Configure New Token”.

  • Token Name: just give your token a name so that you recognise it.
  • Grant Type: Authorization Code
  • Callback URL: one of the redirect URIs from your SKY app
  • Auth URL: https://oauth2.sky.blackbaud.com/authorization
  • Access Token URL: https://oauth2.sky.blackbaud.com/token
  • Client ID: the Application ID from your SKY app
  • Client Secret: also from your SKY app
  • Scope: not currently used by the SKY API
  • State: you could put a value here but it has no real use in the context of Postman
  • Client Authentication: Send as Basic Auth Header

Press the “Get New Access Token” button. This will prompt you to log into Blackbaud and go through the OAuth2 process. It will then save a token in the Current Token area. This is then used by your calls.

Set up an individual endpoint call

Now that you have set up authorisation, you can proceed to try a call. You will still need to add your subscription key to the header as shown. (If anybody knows of a way to add that value at the folder level, let me know!)

Put the URL in the address box and change the authorisation to inherit from parent as shown below.

On the Headers tab, add the bb-api-subscription-key.

This can be found on your developer account here: https://developer.blackbaud.com/subscriptions/

Then you are ready to press the Send button to retrieve data from the SKY API.
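
Outside Postman, the equivalent call is straightforward from code too. For example, with Python’s requests library (a sketch, using the constituents endpoint as an example):

import requests

headers = {
    "Authorization": "Bearer <access-token>",         # the token Postman generated
    "bb-api-subscription-key": "<subscription-key>",  # from your developer account
}
response = requests.get(
    "https://api.sky.blackbaud.com/constituent/v1/constituents",
    headers=headers,
)
print(response.status_code)
print(response.json())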

BBCon 2020 – Session Recommendations

This year will be a very strange BBCon. Gone is the flight over, the anticipation, hordes of people all in one place, the discussion over the food, the inevitable “bacon” joke (does that joke ever get old?), the jet lag and of course, all that swag.

One thing that is still there, though, is the quality of the speakers and sessions. As in previous years, I am giving a round-up of the most interesting speakers and most anticipated sessions. These are all my personal opinion which is, of course, from a more technical perspective, so if I have left anybody off the list that justifiably should be on it, I apologise in advance.

So, in no particular order….

Utilize the Power of OData in Blackbaud Altru® to create Valuable Dashboards in Microsoft® Excel and Power BI – Carly Meek

We make use of OData in Chimpegration Cloud for Altru. From a technical perspective it is one of the most important distinctions between the Altru development platform (Infinity) and the SKY API. OData allows any query that has been put together in Altru to be “streamed” to other platforms, in our case, Chimpegration but importantly applications such as Excel, Power BI and other analytic applications. Definitely one to watch if you need access to your data in a more advanced setting.

Nonprofit Analytics: How To Build Financial and Fundraising Dashboards – Thomas A. Evans and Linton Myers

This session is a great follow-up to the session that I did last year with Graham Getty (From Crystal to Cloud. See the live demo part here). Where Graham and I looked at ways in which you could replace and enhance legacy functionality and move to the cloud, Thomas and Linton take another look at how those capabilities have changed, adding some that Graham and I may well have missed. Definitely worth taking a look at.

Get Your Head in the Cloud – Eric Wand

As developers we sometimes forget life before the Cloud. The ease with which you can reach all your resources wherever you may be is something that web applications strive to help customers with. And yet I still find resonance with Eric’s opening sentence: “Remember when the internet felt expensive, untrustworthy and complicated?”. When I compare developing a plugin to developing a cloud solution this still rings true. If I still get nostalgia over desktop applications then surely less technical users may well feel the same. This session is sure to convince you otherwise.

Open a world of possibilities with SKY Developer – Stu Hawkins and Ben Wong

I have seen some of the customisations that Stu and his team have produced and I am regularly impressed. The SKY API platform has come a long way from the early days and there is now so much more functionality available to developers to customise RE NXT. It will certainly be fascinating to see what new and exciting utilities Stu has to offer. If you are new to developing or want to get started make sure that you take a look at this session and quiz Ben who has all the answers!

Back by Popular Demand: NXT Tips and Tricks – Lisa Nurminen and Jarod Bonino

This session is becoming an annual favourite. Having missed it at BBCon 2019 in Nashville (the room was too packed to get into), I managed to see it at BBCon 2019 London. Now the husband-and-wife team (sorry if I spoiled the worst kept secret outside of Blackbaud) is back. Lisa and Jarod know NXT inside out and have found ways of working with it that overcome any drawbacks that you might find. I felt that this session was worth it even though I am not working with NXT on a day-to-day basis. You can be sure that they will have some new tips and tricks up their sleeves as well as showing the best ones from last year.

Mastering Security in Blackbaud Raiser’s Edge NXT® – Bill Connors

No session list would be complete without Bill Connors. Bill wrote the book on Raiser’s Edge (literally). It will be great to see a session looking at NXT security. While the database view is somewhat of a known entity, the new NXT modules are, for me at least, something of a mystery. How to link that functionality with good organisational policies will be essential viewing for anybody managing a database.

Nothing takes your fancy? Well here are some of our favourites from past years:

Troubleshooting the creation of a SKYUX Addin on Windows

I just came back from BBCon in Nashville all fired up and ready to create a SKY UX add-in. For those of you new to this, an add-in can be a tile in a SKY-based application, i.e. Raiser’s Edge NXT (with other components to come in the future).

I have tried this in the past without much luck. There were issues with the SKY UX CLI not working on my Windows machine (I am told the dev team mainly use Macs). Fast forward a few years and I thought that I would give it another go.

Before I start, I just wanted to say that the documentation that Blackbaud have produced for all aspects of the SKY developer platform is really very good. It is so much better than anything that they have ever produced in the past. I wanted to document this in case anybody should ever run into the same difficulty. It may be that you will never have this problem (especially if you, unlike me, actually read the prerequisites before starting!)

Configuration

I am primarily following the instructions here.

However, before I could even start with that I needed to install the SKY UX SDK. I originally ran:

npm install -g @blackbaud/skyux-cli

but the correct command is the following (thanks Ben Lambert for setting me straight!):

npm install -g @skyux-sdk/cli

I then followed the instructions in the main article. I fired up Visual Studio Code, started a terminal window and went into my project directory.

skyux new -t addin

This was where the process failed for the first time. I got the error:

 addin template successfully cloned.
Setting @blackbaud/skyux version 2.54.1
Setting @blackbaud/skyux-builder version 1.36.0
× Running npm install (can take several minutes)
npm install failed.

Following this I ran a verbose version of the command:

skyux new -t addin --logLevel verbose

This gave me much more information. It told me that Python was not installed on my system so it could not work.

I ran the following to install Python. However, this was not all plain sailing either… I ran this from a command prompt run as Administrator:

npm install --global --production windows-build-tools

The first time this ran, it told me that Python was installed successfully, but then it just sat there for a while. Task Manager told me that the command prompt was doing something but nothing happened on the screen. I waited for at least half an hour. The command prompt no longer appeared to be working hard (according to Task Manager), so I broke out of the process (CTRL+C). I ran the same process again and it completed very quickly, returning me to the command prompt.

I then ran the skyux new command again and everything appeared to install.

Serving up the application

The next step according to the instructions was to “serve” the application. There is one, probably obvious, step that is missing from the instructions. In Visual Studio Code you need to open the folder where your project has been installed. When you first open VS Code, assuming that no previous workspace or folder has been opened (I had closed mine), all you have is the welcome page. Click on the open folder button to show your app in the folder structure.

You can then serve the app:

skyux serve -l local

Introducing Chimpegration Cloud for Raiser’s Edge NXT

When NXT was announced at BBCon 2014 there was some initial confusion as to what it would mean for Raiser’s Edge users. I have to be honest, at the time I was underwhelmed. I could see the logic of moving gradually to the cloud but it wasn’t exciting. As the years have gone by and NXT has matured and developed I have begun to see the benefits.

Of course DBAs still spend a lot of time in the traditional database view and there is a lot of impatience for NXT to catch up so that the web view goodness is delivered to those who are heavily involved with Raiser’s Edge.

As developers we have had high hopes for the web based API. All of a sudden we could break out of the plugins area and develop Chimpegration’s potential.

Chimpegration Cloud follows a similar pattern to Raiser’s Edge NXT. Some features are found in the database view plugin and also found in the web Cloud version. However where Chimpegration Cloud shines is with the new possibilities available.

Up until now it has only been possible to schedule processes with on-premise installations of Chimpegration. As standard, it is now possible to collect bounces, unsubscribes, opens, clicks and more on a regular basis. Set it up and let it run. Have it run at night so that the results are ready for when you arrive in the office in the morning. Or sit with your coffee and be mesmerised as the number of processed records increases.

Q. What could be better than scheduling a process?
(That was a rhetorical question; I wasn’t expecting anybody to put their hand up.)

A. Real-time processing. What does that mean? Simple. You set up an action so that as soon as a constituent subscribes, bounces or unsubscribes it feeds directly into RE!

This is just the beginning. Chimpegration Cloud is waiting for the SKY API to catch up! We want to offer exports (the functionality is in place for Altru and BBCRM so come on SKY API team give us data export!) and we are working on import.

We are the masters of our own destiny, or at least of our own servers. This means that if we see that processing is going slowly we can ramp up the power. You no longer have to share your Chimpegration processing power with others.

If you are interested (and who wouldn’t be if you have managed to read this far) then why not take a two week trial or get in touch to ask us a question.

Introducing Chimpegration Cloud for Altru

Do you use Mailchimp? Or, even if you use another email marketing tool, what is your process?

After writing and rewriting the text, designing and redesigning the layout, adding graphics, seeking feedback and finally sending out one of the greatest appeals you have ever done, you wait for the responses.

Of course the best type of response is the donations that come flooding in but the other kind, the ones that count towards determining your constituent segments and who to target in the future, also arrive. Every bounce, unsubscribe, open and click helps you to work out who is going to become that next major donor and who does not want to hear from you again (maybe your choice of dancing cats in the appeal email should be revisited the next time around!)

You pull up a report in Mailchimp and painstakingly update Altru with those unsubscribes and bounces; do you really have time to add the numerous clicks too?

With Chimpegration Cloud, you set up your process to retrieve different results. Track your opens and clicks by adding interactions. Mark your constituents as ‘do not contact’ or add an attribute if they unsubscribe. Set the email to an invalid email address for bounces or go crazy and do any combination of the above.

You can also export your records to Mailchimp to start with. Set up a query and push the constituents out to Mailchimp.

You can do all of this as a one off or you can set up a schedule so that it runs at night with everything ready for you when you walk through the office door the next day.

Q. What could be better than scheduling a process?

(That was a rhetorical question; I wasn’t expecting anybody to put their hand up.)

A. Real-time processing. What does that mean? Simple. You set up an action so that as soon as a constituent subscribes, bounces or unsubscribes it feeds directly into Altru!

This is just the beginning. We are constantly updating the application and adding new features. Look out for import soon.

If you are interested (and who wouldn’t be if you have managed to read this far) then why not take a two week trial or get in touch to ask us a question.

Introducing Chimpegration Cloud for BBCRM

Do you use Mailchimp? Or, even if you use another email marketing tool, what is your process?

After writing and rewriting the text, designing and redesigning the layout, adding graphics, seeking feedback and finally sending out one of the greatest appeals you have ever done, you wait for the responses.

Of course the best type of response is the donations that come flooding in but the other kind, the ones that count towards determining your constituent segments and who to target in the future, also arrive. Every bounce, unsubscribe, open and click helps you to work out who is going to become that next major donor and who does not want to hear from you again (maybe your choice of dancing cats in the appeal email should be revisited the next time around!)

You pull up a report in Mailchimp and painstakingly update BBCRM with those unsubscribes and bounces; do you really have time to add the numerous clicks too?

With Chimpegration Cloud, you set up your process to retrieve different results. Track your opens and clicks by adding interactions. Mark your constituents as ‘do not contact’ or add an attribute if they unsubscribe. Set the email to an invalid email address for bounces or go crazy and do any combination of the above.

You can also export your records to Mailchimp to start with. Set up a query and push the constituents out to Mailchimp.

You can do all of this as a one off or you can set up a schedule so that it runs at night with everything ready for you when you walk through the office door the next day.

Q. What could be better than scheduling a process?

(That was a rhetorical question; I wasn’t expecting anybody to put their hand up.)

A. Real-time processing. What does that mean? Simple. You set up an action so that as soon as a constituent subscribes, bounces or unsubscribes it feeds directly into BBCRM!

This is just the beginning. We are constantly updating the application and adding new features. Look out for import soon.

If you are interested (and who wouldn’t be if you have managed to read this far) then why not take a two week trial or get in touch to ask us a question.

Name Splitting in Importacular

Every so often we get a support question from a user asking us how they can import data like the following that appears in one Excel column:

“Dr David A Zeidman PhD”

We have invariably told them that this is very difficult to manage and that they would have to manually break the one column up into five separate components (title, first name, middle name, last name and suffix) so that they could map them.

With Importacular 3.5 (available now for self-hosted organizations and coming within an indeterminate period of time for Blackbaud-hosted users) you are able to import combined fields like this.

The new constituent area settings allow you to split one field from your incoming file or data source into its component parts. The logic takes into consideration common titles, first names and last names (taken from US survey data) as well as suffixes. It also handles multi-word last names, e.g. Von Trap or De La Fuente.
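
To give a flavour of the logic, here is a simplified sketch of my own in Python (not Importacular’s actual algorithm, and with tiny illustrative lookup sets):

TITLES = {"Dr", "Mr", "Mrs", "Ms", "Prof"}
SUFFIXES = {"PhD", "Jr", "Sr", "III"}
LAST_NAME_PREFIXES = {"Von", "De", "La", "Van"}

def split_name(full_name):
    parts = full_name.split()
    title = parts.pop(0) if parts and parts[0] in TITLES else ""
    suffix = parts.pop() if parts and parts[-1] in SUFFIXES else ""
    first = parts.pop(0) if parts else ""
    # Fold multi-word last names (e.g. "De La Fuente") into one surname.
    last_parts = [parts.pop()] if parts else []
    while parts and parts[-1] in LAST_NAME_PREFIXES:
        last_parts.insert(0, parts.pop())
    middle = " ".join(parts)
    return title, first, middle, " ".join(last_parts), suffix

print(split_name("Dr David A Zeidman PhD"))
# ('Dr', 'David', 'A', 'Zeidman', 'PhD')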

What is the best part of this? There is absolutely no extra cost to use this feature. It is included as standard irrespective of whether you have purchased any other data sources.

Download the latest version of Importacular now!