Mobile Storage Management
During the spring 2008 semester, a team of NCSU students developed a prototype web interface that allows management of an EMC Celerra storage array from an Apple iPod touch. The prototype was successful enough that we want to continue researching handhelds as a viable, secure management interface.
The goal of this semester’s project is twofold:
- Enhance the application to work on multiple handheld platforms as browser-based code.
- Package the application into a form that allows controlled distribution to customers for usability studies.
EMC will provide the previous semester’s project to use as reference/starting code. We desire this student team to:
- Learn and understand how the previous semester’s project interacts with EMC’s Celerra manager interface.
- Using additional handheld, Wi-Fi-enabled platforms provided by EMC, evaluate and modify the browser-based application as necessary so that it works reliably and securely on multiple platforms.
- Verify the application can make secure connections to the Celerra across the multiple platforms.
- Modify the application to support controlled distribution to customers targeted for participation in usability studies.
- Implement a capability allowing the handheld application to receive real-time alerts.
Benefits to NC State Students
This project provides an opportunity to attack a real-life problem covering the full engineering spectrum, from requirements gathering to research, design, and finally prototype implementation. It will provide ample opportunity for creativity and innovation. EMC will work closely with the team to provide guidance and customer feedback as necessary to maintain project scope and size. The project will give team members exposure to commercial software development.
Benefits to EMC
As storage becomes more prevalent in enterprise and mid-size companies, the ability to provide management information via lightweight, mobile devices becomes more important. This project gives EMC an opportunity to understand how mobile management could and should be implemented from the mindset of future storage professionals.
EMC Corporation is the world's leading developer and provider of information infrastructure technology and solutions. We help organizations of every size around the world keep their most essential digital information protected, secure, and continuously available.
We are among the 10 most valuable IT product companies in the world. We are driven to perform, to partner, to execute. We go about our jobs with a passion for delivering results that exceed our customers' expectations for quality, service, innovation, and interaction. We pride ourselves on doing what's right and on putting our customers' best interests first. We lead change and change to lead. We are devoted to advancing our people, customers, industry, and community. We say what we mean and do what we say. We are EMC, where information lives.
We help enterprises of all sizes manage their growing volumes of information—from creation to disposal—according to its changing value to the business through information lifecycle management (ILM) strategies. We combine our best-of-breed platforms, software, and services into high-value, low-risk information infrastructure solutions that help organizations maximize the value of their information assets, improve service levels, lower costs, react quickly to change, achieve compliance with regulations, protect information from loss and unauthorized access, and manage and automate more of their overall infrastructure. These solutions integrate networked storage technologies, storage systems, software, and services.
EMC's mission is to help organizations of all sizes get the most value from their information and their relationships with our company.
The Research Triangle Park Software Design Center is an EMC software design center. We develop world-class software that is used in our NAS, SAN, and storage management products.
- Fidelity Investments
Online Game to Promote Investment Expertise for Gen-Y
Fidelity Web Technology is responsible for the Website experience for customers, who may be retail or 401k investors. Many of our newest investors are in the Gen-Y demographic and are making the first serious investments of their lives. Although there is a wealth of material available on the Internet about all aspects of investing, much of it is dry and uninteresting. More importantly, most beginning investors don’t know where to start.
Online gaming has the promise of bringing together many aspects of financial education in a way that is both interesting and relevant.
Design an online experience in a gaming or virtual reality environment that incorporates elements such as:
- Basic investment concepts, such as asset allocation and specific security types (stocks, bonds, and money market)
- Life planning
- Historical analysis
Technology environments could be any of the following:
- Flash-based web game
- Second Life or other virtual world environment
- Console gaming such as the Wii or Playstation
A Durham-based team including designers and financial analysts is available as a resource for investment information and other questions.
- Duke Energy
An Enterprise Service Bus
With the advent of enterprise interoperability among computer systems, there is an increased need to manage interfaces and data between applications in a way that minimizes modifications to receiving applications when data schemas change. This presents challenges in hardware and software technologies that can be solved by leveraging an Enterprise Service Bus (ESB). An ESB is an abstraction layer above messaging infrastructure (possibly of multiple types) that is used for connecting applications and transforming/adapting messages to various interfaces.

Students will prototype an Enterprise Service Bus to showcase the interoperability between applications and demonstrate scalability, reliability, and performance. The implementation of the ESB architecture will demonstrate messages being put on the bus and receiving applications listening for messages and processing the data. For this prototype, simulated alerts from meters will put messages on the bus, and receiving applications will contact customers/employees through various devices or mediums (cell phone, PDA, e-mail, paging, or other devices). Through this prototype, students will gain experience developing SOA services that connect to an ESB for simplified processing of data.
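The message flow described here (simulated meter alerts published to the bus, listeners transforming and dispatching them) can be sketched as a minimal in-process publish/subscribe bus. This is an illustration of the pattern only; the topic and channel names are invented, and a real ESB would sit atop actual messaging infrastructure:

```python
from collections import defaultdict

class ServiceBus:
    """Minimal in-process stand-in for an ESB: topic-based pub/sub
    with per-subscriber message transformation."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler, transform=lambda msg: msg):
        # transform adapts the bus message to the receiver's schema,
        # so schema changes are absorbed here, not in the receiver
        self._subscribers[topic].append((handler, transform))

    def publish(self, topic, message):
        for handler, transform in self._subscribers[topic]:
            handler(transform(message))

# Simulated meter alert fanned out to two notification channels
bus = ServiceBus()
sent = []
bus.subscribe("meter.alert", lambda m: sent.append(("email", m["text"])))
bus.subscribe("meter.alert",
              lambda m: sent.append(("sms", m)),
              transform=lambda m: m["text"][:160])  # SMS wants short plain text
bus.publish("meter.alert", {"meter_id": 42, "text": "Voltage out of range"})
```

Because each subscriber carries its own transform, a schema change on the bus touches only the adapters, not the receiving applications, which is the decoupling the project is meant to showcase.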
- Progress Energy
Re-Design of Enterprise Safety Tests & Waste Management Applications
Progress Energy, headquartered in Raleigh, N.C., is a Fortune 250 energy company with more than 21,000 megawatts of generation capacity and $10 billion in annual revenues. The company includes two major utilities that serve more than 3.1 million customers in the Carolinas and Florida. Progress Energy is the 2006 recipient of the Edison Electric Institute's Edison Award, the industry's highest honor, in recognition of its operational excellence. The company also is the first utility to receive the prestigious J.D. Power and Associates Founder's Award for customer service. Progress Energy serves two fast-growing areas of the country, and the company is pursuing a balanced approach to meeting the future energy needs of the region. That balance includes increased energy efficiency programs, investments in renewable energy technologies and a state-of-the-art electricity system. For more information about Progress Energy, visit the company's Web site at www.progress-energy.com.

Background for Project
Re-Design of the Safety Test Lab Application from Access to C# ASP.Net.
This application is used by Progress Energy Florida and Carolina. Its purpose is to test the rubber gloves and blankets that linemen use for protection. The gloves and blankets come through the lab at Garner to be washed on a schedule; they are then tested (at 20,000 volts) and dated. The tests indicate whether there are any paths for electricity. The goods then go back out for use.
Re-Design of the Waste Management Application from Access to C# ASP.Net.
This application is used by Progress Energy Florida and Carolina. Its purpose is to test waste and determine the waste disposal process. Waste from T&D customers, such as small waste (dirt) or oil from transformers, comes through the Transformer Shop, along with lights, batteries, and paint; some oil is sold, some is recycled at Garner, and some is burned at the Cape Fear Plant. The determination of how to handle the oil is made through analysis when the oil is received.
Benefits to NC State students
This project provides an opportunity to develop requirements gathering, design, development, and testing skills for C# ASP.Net applications and Oracle databases.
Benefits to Progress Energy
Provides a centralized view of the condition and cleaning schedule of safety rubber goods for both Florida and the Carolinas. Eliminates the need for desktop installation of this application and reduces the number of technologies Progress Energy developers support.
Measuring, Analyzing, and Comparing High-volume Data Transfer and Graphical Rendering in Adobe Flash
This project involves research into, and development of, a system for in-depth analysis of the graphical rendering modes and data transfer protocols available in current production and upcoming beta versions of the Adobe Flash Player. For example, Flash Player 9 had three modes for delivering graphics to the screen: normal, transparent, and opaque. The Flash Player 10 beta adds two more: direct and rendering via an on-board Graphics Processing Unit (GPU). The system should identify the pros and cons of each rendering technique, including a comparison of the Adobe players across the supported platforms (Windows, Mac OS X, and Linux). In addition to the analysis of rendering performance, this project will also involve gathering performance numbers on various protocols for data transfer in a distributed computing environment. Beyond research and the resulting recommendations, the end product should include an interactive Adobe Flash application that allows subsequent test runs to be completed and measured. The user of this application should be able to submit various sizes of data and compare performance across the data transfer protocols, graphical rendering modes, and versions of the Adobe Flash Player.
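The measurement side, independent of Flash specifics (the interactive application itself would be written in ActionScript), boils down to repeated timed runs with summary statistics. This Python sketch shows the harness shape under that assumption:

```python
import statistics
import time

def benchmark(fn, runs=20, warmup=3):
    """Time repeated runs of fn and summarize the samples; warmup
    runs are discarded so caching effects don't skew the numbers."""
    for _ in range(warmup):
        fn()
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - start)
    return {
        "mean_s": statistics.mean(samples),
        "stdev_s": statistics.stdev(samples),
        "min_s": min(samples),
        "max_s": max(samples),
    }

# Stand-in workload; a real run would render a scene or push a payload
result = benchmark(lambda: sum(range(100_000)))
```

Reporting a spread (min/max/stdev) rather than a single number matters here, since rendering and network timings vary run to run across platforms.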
I/O Trace Analysis
NetApp has stored over 100GB of CIFS (Common Internet File System) and NFS (Network File System) trace data, collected from real customers over a period of several months. This project will involve creating tools and methods for analyzing this data. Some data is in HP's DataSeries format, which has existing library code to parse actual I/O packets. Other data is in a parseable text-based format.
Part 1: Write C/C++ code that will transform the non-DataSeries data into DataSeries data. This will provide familiarity with an emerging standard for trace data.
Part 2: Analyze the existing data with simple methods, producing a workload characterization that includes graphs showing I/O sizes and rates, diurnal variation, averages and peaks, etc.
Part 3: Analyze the existing data with sophisticated methods, including the automatic detection of individual per-application streams of I/O requests within the (apparently random) aggregate I/O trace. An "oracle" will be implemented that perfectly predicts read-ahead for the dataset (i.e., using knowledge from the future).
Part 4: Implement, in C/C++, code that will provide a framework for using the trace data to evaluate various read-ahead algorithms, comparing the results with the "oracle" computed in Part 3.
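A Part 2-style characterization can be sketched over synthetic trace records; the (timestamp, operation, size) layout is an assumption for illustration, not the actual DataSeries schema:

```python
from collections import Counter
from datetime import datetime

# (timestamp, op, size_bytes) -- an assumed, simplified trace record
trace = [
    (datetime(2008, 9, 1, 2, 15), "read", 4096),
    (datetime(2008, 9, 1, 2, 16), "write", 8192),
    (datetime(2008, 9, 1, 14, 5), "read", 65536),
    (datetime(2008, 9, 1, 14, 6), "read", 4096),
]

# I/O size distribution per operation type (feeds a size histogram)
sizes = Counter((op, size) for _, op, size in trace)

# Diurnal variation: request count bucketed by hour of day
by_hour = Counter(ts.hour for ts, _, _ in trace)

# Average read size (one of the "averages and peaks" metrics)
avg_read = (sum(s for _, op, s in trace if op == "read")
            / sum(1 for _, op, _ in trace if op == "read"))
```

The same bucketing approach scales to the real 100GB dataset by streaming records instead of holding them in a list.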
To complete all 4 parts requires an aggressive, well-managed software engineering schedule. To make this easier, we can drop Part 1 and concentrate only on the data set that is already in DataSeries format. Completing only Parts 2 and 3 may be sufficient for a journal paper, although having Part 4 in place would make the paper stronger.
This project should attract a team with strong C/C++ programming skills; a strong interest in the problems of manipulating huge amounts of data; and an interest in applying algorithms to operating system design. Familiarity with programming in the Linux environment using Open Source tools is required. (If the team is interested in statistical pattern recognition or other advanced algorithmic methods, we can tailor Parts 3 and 4 to those interests.) All the code produced by this project is expected to be used at NetApp for several years, and there is a possibility of follow-on projects that build on this work.
- Fujitsu America, Inc.
GlobalSTORE Help Menu System
The team will create a new help menu system for the GlobalSTORE application. The current help menu system falls short on both usability and technology grounds. It fails on usability because the help content displays in a window that covers the application, leaving the operator unable to see the object he or she is getting help on. In addition, future versions of GlobalSTORE will no longer require IIS to be installed, due to tightening Payment Card Industry restrictions. Since the current help menu system requires IIS, an update must occur.
The help menu system consists of multiple components. A UI component will display content to an operator without obstructing the view of the application. A storage component will provide a system for customers to easily store and update help content. A possible transport layer may need to be created in order to move data from the storage system to the UI if a currently implemented mechanism cannot be used. Additionally, a tool for maintaining and updating the help content should be created.
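As a sketch of how the storage and UI components might meet, consider a minimal content store keyed by UI control ID. The control IDs and JSON format are invented for illustration; a real deployment would more likely keep content in a database such as SQL Server:

```python
import json

# Storage component: help content keyed by UI control ID (assumed scheme)
HELP_STORE = json.loads("""{
    "btn_tender": "Completes the sale and opens the cash drawer.",
    "txt_sku":    "Scan or type the item's SKU here."
}""")

def help_for(control_id):
    """Transport/UI boundary: fetch the text a non-obstructing side
    panel would display, with a fallback for unknown controls."""
    return HELP_STORE.get(control_id, "No help available for this item.")

text = help_for("btn_tender")
```

Keeping the store as plain structured data also gives the content-maintenance tool a simple target: it edits the store, and the UI component only ever reads from it.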
There will be several products and technologies used in this project. The goal is not for the students to become experts in any of them, but rather to understand that such classes of products and technologies exist and in which stages of development their use is appropriate. Although much of what we use at Fujitsu is Microsoft based, the purpose is to understand when to use particular classes of products or technologies, not only these specific ones. Examples of the products and technologies that will potentially be used in this project include:
- Microsoft Windows XP / 2000 (Operating System)
- Microsoft VB.Net (Development Language/Environment)
- Microsoft Developer Network (MSDN) Library
- Microsoft SQL Server 2005 (Database)
- Microsoft Visual Studio 2008 (IDE)
- Microsoft Word (Word processor - documentation)
- Microsoft PowerPoint (Presentations)
Broadcast Television Video Server
Crispin seeks to develop a video server that is capable of recording and playing back digital audio/video in high-quality broadcast format. Hardware and a software API capable of controlling the video and audio functions will be provided. The part that remains is the integration of this third-party hardware and software into Crispin's own control software, and could include some GUI development too. You'll need to code in MS Visual C++, but no previous experience with video or broadcasting is necessary.
A video server is simply a computer that plays digital audio and video files (such as MPEG or DV files) through a decoder card that can be connected to a viewing device or other video recording or transmission device. The key is that it plays and/or records high-resolution, broadcast-quality video and audio (NTSC standard of 525 lines at 30 frames per second) and can do so on multiple video channels or "ports" simultaneously.
The server itself has already been built here at Crispin and we could bring this to the lab for the student team to work on directly. It's a 3RU rack-mounted PC running Windows Server 2003. There's a video card included in the server that we purchased from Matrox. The card is called the Matrox DSX.sd multi-channel SD (Standard Definition) analog & digital I/O card (http://matrox.com/video/en/products/developer/dsx/#dsxsd). Matrox also provided software and code that can be incorporated into one's own software. The documentation is decent and there are test programs with source code that are already compiled that show and test the basic capabilities of the card.
There are a few additional small pieces of equipment that need to be added, which we will provide: the cables, an Audio/Digital converter, and a 3 x LCD viewing monitor. In short, with some additional setup tasks and the connection of these components, the server should already be capable of playing video clips using the test software.
The overall project is to confirm that the current video card and software work (through some help from Crispin, reading documentation and trial-and-error). The second phase is to then integrate the capabilities into a Crispin interface driver. This driver is a Windows DLL, for which we have many templates and examples that show how to integrate the record and playback capabilities of this server with our Crispin software.
Crispin Corporation is a leading worldwide provider in television broadcast automation with hundreds of customers throughout North America.
- TransLoc, Inc.
Real-Time Mashup and API
A few years ago a band of comp-sci kids graduated from NCSU and struck out on their own. The result was a way for students on campus to track the Wolfline buses in real-time, on the web, and on mobile phones. Now that technology is being used at over half a dozen schools.
Now it's your chance to develop this technology further! Impact students every day by creating a Web API for the Transit Visualization System. Expose useful real-time information about hundreds of buses to the world. Enable developers like you to build and share applications that extend the benefits of the TVS.
This project consists of two parts.
First, design and implement a robust and easy to use API that is capable of supporting hundreds of simultaneous users. This will involve:
- determining what data to provide
- choosing a delivery protocol (XML/JSON, SOAP/REST, etc.)
- building and documenting the server-side components
- including access and abuse handling mechanisms
Technologies for the back end are flexible, but PHP, Python, and MySQL under Linux are preferred.
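If the team goes the REST/JSON route, the server-side shape might look like the sketch below. The route name and response fields are invented for illustration, and the example is in Python only because the back end is flexible; the same shape carries over directly to PHP:

```python
import json

# In-memory stand-in for the real-time vehicle feed
VEHICLES = [
    {"id": "bus-12", "route": "Wolfline 9", "lat": 35.7847, "lon": -78.6821},
    {"id": "bus-31", "route": "Wolfline 5", "lat": 35.7721, "lon": -78.6740},
]

def get_vehicles(route=None):
    """Handler for a hypothetical GET /api/v1/vehicles[?route=...].
    Returns a JSON string so any web framework can serve it directly."""
    matches = [v for v in VEHICLES if route is None or v["route"] == route]
    return json.dumps({"vehicles": matches, "count": len(matches)})

body = get_vehicles(route="Wolfline 9")
```

Versioning the path (`/api/v1/...`) from day one is a cheap way to keep the documented contract stable for outside developers while the API evolves.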
- Thomson Reuters – Healthcare
FAST REBUILDING OF EPISODES OF CARE
Implement a prototype that builds so-called episodes of care in considerably less time than the standard episodes builder that is part of the Medstat Advantage Suite product. Test correctness and performance of the prototype.
Student Insights Gained
Students will learn the following: working in a product development environment; working on a data warehouse-related product; interacting with subject matter experts; devising and proving the validity of the chosen heuristic approach; implementing a prototype; testing the correctness and performance of the prototype; and presenting findings to company representatives.
Student Skills and Experience
Required: Database programming, basic knowledge of SQL-92, ability to work with subject matter experts.
Optional: Unix scripting language (Python, or one of the shell script languages).
Development and Test Environment
Students will have access to a Unix server running the required modules of Advantage Suite, and a database server running the Teradata RDBMS. All data will be de-identified to satisfy HIPAA regulations.
Dr. Uwe Pleban, Vice President for Research & Development, will be the company mentor for this project.
Medstat Advantage Suite and Advantage Build
Medstat Advantage Suite is the flagship product in the Payer Management Decision Support sector of the Healthcare business of Thomson Reuters. It consists of a suite of applications for healthcare analysts, and a component called Advantage Build, which is used to construct and regularly update a data repository that the applications query. Advantage Suite users can perform analyses of quality of care, access to care, and cost of care; conduct disease management and case management; initiate fraud, waste, and abuse investigations; and more.
Data Repository and Star Schema
The data repository is also known as an analytical data mart. It contains multiple years (usually three or more) of detailed administrative claims data (facilities claims, professional claims, prescription drug claims, laboratory claims, etc.) together with analytical aggregates, including facilities admissions (or stays), and episodes of care. Admissions and episodes are constructed when the data repository is first built, and then are re-built every time the database is updated with new claims, usually once a month.
The database uses a star schema data model with some extensions. Detail claims, claim aggregates, and eligibility data are stored in fact tables. Each aggregate table links to its constituents via a so-called associative or bridge table. For example, each admission has a unique admission ID, and the associative table maps each such ID to the internal claim IDs of those claims that make up the admission. Since an episode may consist of both admissions and detail claims, two such associative tables are kept for episodes. Aside from fact and associative tables, the data model contains a large number of dimension tables, upward of 25. They include tables for diagnosis codes, procedure codes, providers (hospitals, doctors, pharmacies), persons (patients, recipients, beneficiaries, covered lives), time period information, etc.
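The bridge-table pattern can be demonstrated with a toy schema; the table and column names below are illustrative, not the actual Advantage Suite data model:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE claim (claim_id INTEGER PRIMARY KEY, amount REAL);
    CREATE TABLE admission (admission_id INTEGER PRIMARY KEY);
    -- associative/bridge table mapping each admission to its claims
    CREATE TABLE admission_claim (admission_id INTEGER, claim_id INTEGER);
""")
db.executemany("INSERT INTO claim VALUES (?, ?)",
               [(1, 500.0), (2, 1200.0), (3, 80.0)])
db.execute("INSERT INTO admission VALUES (10)")
db.executemany("INSERT INTO admission_claim VALUES (?, ?)",
               [(10, 1), (10, 2)])

# Roll the admission up from its constituent claims via the bridge;
# claim 3 is "free standing" and stays outside the aggregate
total, n = db.execute("""
    SELECT SUM(c.amount), COUNT(*)
    FROM admission_claim ac JOIN claim c ON c.claim_id = ac.claim_id
    WHERE ac.admission_id = 10
""").fetchone()
```

The same join, run in the other direction, is what lets an analyst drill down from an aggregate row to the detail claims behind it.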
Episodes of Care
Episodes of care are constructed by invoking the Medical Episodes Grouper (MEG) that is part of Advantage Build. Episodes are classified as acute or chronic. For example, if a patient breaks a hip, all the resulting claims for hospital admission, surgery, doctor’s visits, X-rays, prescription drugs, use of a wheelchair, physical therapy, etc. connected to that medical event will be grouped into an acute episode. Conversely, for a patient with diabetes, a chronic episode is constructed for each calendar year during which diabetes related claims are incurred. Multiple episodes can occur concurrently, since a person with chronic asthma may have an acute flare-up, suffer from the flu, and break a limb at the same time. On average, episodes consist of about 25 claims, but episodes with more than a thousand constituent claims have been observed.
Episodes add great value to the analytical detail data, since a healthcare analyst can start at the episode level and then drill down to the detail claims and admissions that are part of the episode. Another interesting aspect concerns so-called ungroupable claims, i.e., claims which are not part of any episode. Certain types of ungroupable claims may be an indication of fraud, such as free standing claims for MRI scans.
The MEG episodes grouper reads all claims sorted by incurred date, and tries to group them into distinct episodes. Each episode belongs to one of more than 460 episode categories. The exact rules for starting, extending, and closing an episode are very complex, and are not of interest here. After construction, the episodes information (episodes table and associative tables) is written back to the database.
MEG is a complex set of C and C++ modules that execute on the database build server(s). It requires its inputs to be represented as flat files, and produces flat files as output as well. It does not execute inside the database, so a large amount of time is spent moving input data from the database to flat files, and moving the results from flat files back to the database.
Cost of Episode Construction
Episode construction is expensive. While this is usually not a problem when the database is first built (which may take several weeks anyway), the monthly episode update process may take so much time that it may be challenging to meet service level agreements (SLAs) regarding the total elapsed time taken for updating the data repository.
Consider the following situation during a monthly database update for June 2008:
- The database contains 48 months of data. When a new month of data is added, the first month of data will be made inaccessible, so that we have a sliding window of 48 months.
- A person has new claims for June 2008. These may include claims incurred in June, or May, or an earlier month. This is due to claims lag, i.e., the time taken to process a claim and pay it – one of the reasons for the inefficiency of the U.S. healthcare system. The claim may also be a void claim or an adjustment to a previous claim. It is even possible for claims that are more than a year old to appear, due to a change in eligibility for services. This situation usually happens in the Medicaid and Medicare sector.
- For each person with new claims, episodes need to be re-constructed. This is required since the occurrence of new claims will change the behavior of the grouper, and result in the creation of a new episode, the de-construction of an existing episode, or the extension of a given episode at the beginning or the end.
The key question is this: how much claims history needs to be read and combined with the new claims in order to correctly update the episodes for a person?
The conservative answer in the scenario above would be to read all of the previous 47 months of claims data. Clearly, a lot of unnecessary data is read, since it is almost a certainty that episodes that lie far back in the past are not affected by recently incurred claims. To address this issue, Advantage Build has a feature that restricts the reading of episodes to a given number of years. Analysts recommend that the time window be at least two years, and preferably three.
Empirical measurements have shown that commonly only 15% of the episodes are affected by the new claims. The other 85% are needlessly re-computed.
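To picture the intended savings, a conservative selection might rebuild only those episodes whose window, padded by a bound on claims lag, can overlap a person's new claims. The lag bound and record shapes below are assumptions for illustration; devising and proving the real heuristic is the project itself:

```python
from datetime import date, timedelta

MAX_LAG = timedelta(days=365)  # assumed bound on how far back a new claim reaches

def episodes_to_rebuild(episodes, new_claim_dates):
    """episodes: list of (episode_id, start, end). Return the IDs whose
    window, padded by MAX_LAG, could overlap a new claim's incurred date."""
    if not new_claim_dates:
        return []
    earliest = min(new_claim_dates)
    # Conservative: any episode ending within MAX_LAG of the earliest
    # new claim might be affected by it and must be re-grouped.
    return [eid for eid, start, end in episodes
            if end >= earliest - MAX_LAG]

episodes = [
    ("old-hip", date(2005, 3, 1), date(2005, 6, 30)),
    ("asthma-2007", date(2007, 1, 1), date(2007, 12, 31)),
    ("flu-2008", date(2008, 2, 1), date(2008, 2, 20)),
]
rebuild = episodes_to_rebuild(episodes, [date(2008, 5, 15)])
```

Episodes that end far in the past ("old-hip") are skipped entirely, which is where the bulk of the needless re-computation would be avoided.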
Detailed Problem Statement
This leads us to the problem statement:
Devise a conservative heuristic that limits the number of claims read from the database for re-constructing episodes for a particular person with new claims. The heuristic must be conservative in the sense that (a) it always captures all the claims necessary for building episodes affected by new claims, and (b) it may read claims that are grouped into the same episode as in the previous month’s run. Prove that the heuristic is conservative. Determine what additional information needs to be deposited in the database as part of the episodes update in order to drive the heuristic. Implement a prototype that combines the heuristic with MEG. Test the correctness of the prototype, and compare its performance with the standard approach to episodes re-construction.
An initial heuristic may be refined based on new insights gained from discussions with subject matter experts in episodes construction.
Let us consider an example involving a very large data warehouse. The Centers for Medicare and Medicaid Services (CMS) are currently constructing a claims data repository for more than 45 million Medicare beneficiaries. About 200 million detail claims are incurred every month, so 36 months of data will exceed 7 billion claims requiring more than 10 TB of RDBMS storage. Each month, about two thirds of the beneficiaries have new claims. If episodes were to be constructed each month, and the “episode window” build parameter were set to 3 years (36 months), then
66% × 7.2 billion = 4.75 billion
claims will need to be read from the database using the conservative approach to episode construction. Using two powerful Sun v890 build servers with 8 dual-core CPUs and 64 GB of RAM each, the current estimate for re-constructing episodes amounts to more than 56 hours of uninterrupted computing time for each server. If the window is set to 24 months, the computing time is estimated to still exceed 37 hours per server.
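As a quick sanity check, the claim-volume arithmetic above can be reproduced directly:

```python
claims_per_month = 200_000_000     # ~200 million detail claims per month
window_months = 36                 # 3-year episode window
share_with_new_claims = 0.66       # about two thirds of beneficiaries

total_claims = window_months * claims_per_month      # 7.2 billion
claims_to_read = share_with_new_claims * total_claims
# ~4.75 billion claims read under the conservative approach
```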
Weather Forecast Archival and Verification System
The WRAL Weather Center produces multiple complete sets of forecasts for our area every day. Currently, those forecasts are not being archived in any formal way, nor are meteorologists able to easily evaluate forecast accuracy or performance.
This archive and verification system will automatically collect forecast and observed weather information from various sources and generate single-forecast and comparative reports on skill, bias, error, and other performance metrics.
The scope of the project can be divided into three general spheres:
- Collection – The system will automatically harvest forecast and observed weather data in various formats (fixed-width columnar text, XML, etc.) from multiple local and network sources.
- Storage – The system will store these data in a relational database.
- Analysis – Users will be able to run reports against the forecast data, including but not limited to raw and root-mean-square error and forecast bias, as well as comparisons between forecasts from multiple sources.
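The core verification metrics named above (raw error, bias, root-mean-square error) reduce to a few lines; the paired forecast/observation values below are made up:

```python
import math

forecasts = [72.0, 68.0, 75.0, 80.0]   # forecast highs, deg F (illustrative)
observed  = [70.0, 69.0, 75.0, 77.0]   # verifying observations

errors = [f - o for f, o in zip(forecasts, observed)]   # raw error
bias = sum(errors) / len(errors)                        # mean error
rmse = math.sqrt(sum(e * e for e in errors) / len(errors))
```

Computing these per forecast source is what makes the comparative reports possible: the same observed series is scored against each source's forecast series.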
The system should provide a web-based interface for end users, prevent unauthorized users from editing the database, and generate alerts in the event an automatic database update fails.
Wherever applicable, the database should dynamically drive options presented to the user. For example, if a new source of forecast data is added to the database, it should automatically be available as an option for applicable reports. Likewise, adding new sources of data or new reports should not require reconfiguring the database structure.
Detailed technical requirements will be provided; however, development should be centered around a Linux platform, using MySQL for the database and Apache for web services. PHP is the development language of choice.
- First Citizens Bank
Prototype Metadata Repository for First Citizens Bank’s Data Warehouse
First Citizens has an enterprise-wide data warehouse that provides reporting and analysis services throughout the Bancshares environment. It serves over 800 users, and a number of key reporting/analysis applications have been developed on it to support key processes in Loan Portfolio Review, Credit Relationship, and Relationship Management, among others.
As we look to leverage the organization’s investment in the data warehouse in other areas, readily accessible metadata (information about the details of the data in the warehouse) is needed. This information will allow end users throughout the organization to better understand what data is sourced in the warehouse, what business rules have been applied, and what reporting frameworks are used to access the data.
This project has two objectives. The first is to develop a working prototype of a metadata repository that is accessible to FCB employees via a web portal on our intranet. Major components of this objective are:
- Document business and technical requirements for the project.
- Develop programs to strip metadata from source loader files.
- Customize the metadata data model for FCB use.
- Develop automated programs to populate the metadata repository.
- Design and implement the user interface for query and update of selected fields.
The second objective of this project is to develop a method to evaluate the source data that is being loaded into the warehouse on a monthly basis. This evaluation would encompass:
- Develop reasonability checks (e.g. validation of file record counts against historical loads)
- Create an automated method to validate the source data loaders against what was actually loaded into the database
- Develop alerting methods to give early warning of abnormalities in the data load process
- Populate the metadata repository with this technical information
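A record-count reasonability check of the kind described above can be sketched as follows; the 20% tolerance and the plain-list history format are assumptions for illustration, not stated requirements.

```python
def record_count_reasonable(current_count, historical_counts, tolerance=0.20):
    """Return True if the current load's record count falls within
    `tolerance` (a fraction) of the average of recent historical loads.
    An empty history is treated as reasonable (nothing to compare against)."""
    if not historical_counts:
        return True
    avg = sum(historical_counts) / len(historical_counts)
    return abs(current_count - avg) <= tolerance * avg
```

A check that fails would feed the alerting methods listed above, giving early warning before a bad load propagates into the warehouse.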
Employment Connections for Generation-Y
YoungAndTalented.com is a social networking, web-based application designed to connect college students, employers, and college career centers in order to facilitate employment opportunities for young adults. Each type of profile will be customized to fit the needs of its users. In addition to being a social network, the application will be a hybrid of a ‘traditional’ job board and will include a variety of features aimed at facilitating employment. While the network will function much like Facebook or MySpace from a student’s perspective, career center and employer users will be able to interact with the network in unique ways: posting jobs, searching students, disseminating information, etc.
The goal is for YoungAndTalented.com to be a dynamic, enterprise-level application considered a cutting-edge Web 2.0 portal. YoungAndTalented.com intends to target the nation’s premier corporations and all of the major hiring entities of recent college graduates, and the portal must meet the expectations of this audience.
The proposed system is a new, self-contained product that should resemble Facebook from a usability and functionality perspective; functionally, it is a variation on a Facebook clone. The core concept revolves around three distinct types of profiles (users, employers, and career centers) and how they interact with one another.
User Profile: The most common profile type will be that of the student/job-seeker. This profile functions much like a Facebook profile, except that the information is intended to be academic or professional in nature. A user profile includes:
- Ability to upload photos, graphics, videos, etc., into their ‘portfolio’
- Recorded ‘video introduction’ for a visitor/recruiter to view
- A free form text welcome statement
- Upload Word documents, e.g. letters of recommendation or resumes
- Import contacts from other social networks
- Create a badge
- Add ‘friends’
- Track job openings and employers’ profiles
- Write a blog
- Apply for employment opportunities
- Interact with other users’ profiles
Employer Profile: We generally consider a major recruiter of college graduates to be the prototypical hiring entity that will join YoungAndTalented.com; however, an Employer Profile could be created for an employer of any size. The features of this profile will be diverse enough to accommodate any size hiring entity. Employer profiles include:
- Ability to style their page to match their organization’s exact corporate identity, adjusting the color scheme, graphics, media, layout, etc.
- Ability to search the entire network in unique ways, such as by GPA and other specific career-related information found on resumes.
- Ability to create a ‘contest or challenge’ that they pose to the talent pool.
- Ability to post unlimited jobs
- Upload promotional media
- Create discussion boards, blogs, and FAQs
- Instant message users
- Provide links and press releases
- View and join career fairs and virtual career fairs.
- One initial creator/profile admin, with ability to create accounts for other corporate users and grant access to the company’s account
- Manage employment opportunities
Career Center Profile: This profile requires the least engagement and could be managed by a passive user while still functioning properly. When a student visits their school’s career center profile, they may view the school’s specific job board and ‘challenges’ board, where a student can see which openings, contests, and challenges are being targeted specifically at their school’s students.
- Act as a clearinghouse for openings/challenges targeted at their university’s students
- Create blogs and discussion boards
- Use a ‘newsletter’ or calendar widget to disseminate information and keep students up to date on events
- Create a contest or challenge
- Upload promotional media
- Grant administrative privileges to career center staff in order to provide counseling, edit blogs, moderate discussion boards.
- Create a ‘virtual career fair’
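The three profile types above and their core interaction (employers posting openings, career centers acting as a clearinghouse, students applying) could be modeled along these lines. All class and field names are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class JobPosting:
    title: str
    employer_name: str
    target_school: str = ""   # set when an opening targets one school's students

@dataclass
class StudentProfile:
    name: str
    school: str
    applications: List[JobPosting] = field(default_factory=list)

    def apply(self, job: JobPosting):
        self.applications.append(job)

@dataclass
class EmployerProfile:
    name: str
    postings: List[JobPosting] = field(default_factory=list)

    def post_job(self, title: str, target_school: str = "") -> JobPosting:
        job = JobPosting(title, self.name, target_school)
        self.postings.append(job)
        return job

@dataclass
class CareerCenterProfile:
    school: str

    def job_board(self, all_postings: List[JobPosting]) -> List[JobPosting]:
        # Clearinghouse view: only openings targeted at this school's students
        return [j for j in all_postings if j.target_school == self.school]
```

The point of the sketch is the relationship among the three profile types; the production system would layer the social-networking features (friends, blogs, media) on top of this core.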
From a scalability perspective, the regular users (students/candidates) will be the focus of concern. For every one school, we expect thousands of student users and a few hundred unique employers. Our aim is to attract hundreds of thousands of student users in the first year, with a rough goal of approximately 10,000 student users within the first month of release at our flagship university. The system must scale smoothly to support this growth.
The Social Workplace
Background for Project
Today there are many social networking applications (e.g. Facebook, MySpace, Twitter) whose purpose is to keep friends current with each other. Perhaps the simplest is Twitter, whose tag line is "What are you doing?" and which offers 140 characters to answer the question. Recognizing that the workplace is also a social place, we would like to experiment with applying social networking to the workplace. The tag line would be: What are you working on?
Most social networks share several constructs (e.g. Feeds, Friends, Friends You May Know, Networks, Photos, Walls, Notifications). Many of these constructs have direct workplace correlates (News, Coworkers (within your project), Coworkers working on similar projects, Other Office Locations, PDFs of Original Docs, Public Posts, Notifications, etc.).
Historically, a lot of emphasis has been placed on document version control: ensuring that only one person may edit a given file at a time and that the most recent version of every file is available. However, there has been little attempt to keep users informed of the activity around documents they are working on or documents they may otherwise be interested in. This project will combine the power of an Enterprise Server Application (e.g. ERP, PLM, DRM, CRM) with a user-friendly, Facebook-style interface to provide the user with a full-featured experience.
This project and its successors will look at implementing social networking in the workplace. The premise is to use existing enterprise server technology to implement this functionality. The rationale is that the workplace is extremely sensitive about owning all of the information associated with its network, its intellectual property, and employee privacy. However, the user-friendliness, natural-language expressiveness, and general aesthetics of today's social networking applications are critical to success.
- Event detection – I-Cubed will host an Enterprise Server Application and a SOAP service interface to provide access to the needed API calls. Additional access can be provided as needed.
- UI design and implementation. The UI should keep users informed of pertinent document events, similar to a Facebook news feed. For example, an event may be “Grant has created a new version of document1.pdf.” or "Alan has just read doc1.pdf "or "Donald edited doc1.pdf for 31 minutes and made 12 changes".
- Stretch goals might include introducing other constructs such as:
- Other documents you may be interested in following
- Other co-workers who have worked on documents you authored
- Notice that several documents with a similar name or tags are being worked on
- This project will interface with an Enterprise Server Application via a SOAP API provided by I-Cubed.
- Client-to-server communication will be enabled by SOAP (Simple Object Access Protocol), a protocol for exchanging XML-based messages.
- The UI should be developed on a framework that is flexible and will allow for the creation of a rich and compelling UI. The application should be portable to various operating systems.
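The news-feed behavior described above can be sketched as a simple event-to-sentence formatter. The event field names and templates below are illustrative assumptions; a real implementation would consume document events arriving through the SOAP interface rather than hand-built dictionaries.

```python
# Hypothetical document-event records; a real feed would be driven by
# events retrieved from the Enterprise Server's SOAP API.
TEMPLATES = {
    "create_version": "{actor} has created a new version of {doc}.",
    "read": "{actor} has just read {doc}.",
    "edit": "{actor} edited {doc} for {minutes} minutes and made {changes} changes.",
}

def feed_entry(event):
    """Render one document event as a Facebook-style news-feed sentence."""
    template = TEMPLATES[event["type"]]
    return template.format(**event)  # extra keys (like "type") are ignored
```

For example, `feed_entry({"type": "create_version", "actor": "Grant", "doc": "document1.pdf"})` yields the first sample sentence given above.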
Since 1984, Integrated Industrial Information, Inc. (I-Cubed) has been at the forefront of Computer-Aided Design/Drafting (CAD) development solutions. They have created a suite of powerful integrations that allow different CAD systems to communicate more directly with product development life-cycle tools, such as Windchill PDMLink created by Parametric Technology Corp. (PTC), one of I-Cubed’s partners. I-Cubed has been sponsoring senior design projects for more than 8 years. Several individuals have been hired by I-Cubed as a result of CS 492 projects, and some of the research from previously sponsored projects has been developed into fully professional products, now sold by Adobe. I-Cubed’s office is conveniently located on NC State’s Centennial Campus in Venture III.
- Northrop Grumman
Avoiding Ad-Hoc Route Expiration
Northrop Grumman Corporation is a $30 billion global defense and technology company whose 120,000 employees provide innovative systems, products, and solutions in information and services, electronics, aerospace and shipbuilding to government and commercial customers worldwide. The Electronic Systems segment is headquartered near Baltimore, Maryland and develops high performance sensors, intelligence, processing, and navigation systems operating in all environments from undersea to outer space.
We would like to develop a method by which route expiration in Ad-Hoc On-Demand routing algorithms can be avoided and by which the true ‘shortest path’ is determined.
Ad-Hoc routing algorithms are used to determine routes through networks without any intervention by the user. On-Demand variants wait to discover a route until one is needed. Once a route is established it can be used freely by any application. If a route is unused for a period of time, that route will expire, and if the destination needs to be contacted again, a new route must be discovered. While ad-hoc route discovery has clear benefits, the discovery process itself is costly: it can generate significant traffic and, depending on the link layer, take considerable time.
Many ad-hoc routing algorithms are designed for highly mobile systems (cars driving down a highway for example). In such systems the correct route to a particular destination would change frequently (necessitating route expiry). However, our requirements differ in that our nodes are expected to move very little, thus removing one reason for using route expiry.
Another reason for the use of route expiry is to avoid routing loops. Our system is implemented using the Ad-Hoc On-Demand Distance Vector (AODV) algorithm. AODV handles the problem of routing loops by using sequence numbers combined with the route expiration time. The route expiration time determines the startup delay, which is how long a node must remain silent upon startup; the startup delay must be at least as long as the AODV delete period. The startup delay prevents routing loops in the event a node is restarted, compensating for the loss of the node's sequence number and the sequence numbers of any routes it had. While our software saves the latest sequence number of the host node to non-volatile memory, it does not save the sequence numbers of any routes. As a result, our software still needs to use the startup delay.
One obvious method of eliminating route expiration is to increase the route timeout to infinity. However, the route timeout affects the delete period, which in turn affects the startup delay. Therefore an infinite route timeout would result in an infinite startup delay, leaving nodes unable to ever communicate.
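The timing dependency described above can be made concrete using RFC 3561's recommended constants. The values and the DELETE_PERIOD formula below are this author's reading of the RFC's defaults, so treat the sketch as illustrative rather than normative.

```python
# RFC 3561 recommended defaults (milliseconds)
ACTIVE_ROUTE_TIMEOUT = 3000   # the route expiry under discussion
ALLOWED_HELLO_LOSS = 2
HELLO_INTERVAL = 1000
K = 5                         # RFC 3561's recommended multiplier

def delete_period(active_route_timeout):
    # DELETE_PERIOD = K * max(ACTIVE_ROUTE_TIMEOUT,
    #                         ALLOWED_HELLO_LOSS * HELLO_INTERVAL)
    return K * max(active_route_timeout, ALLOWED_HELLO_LOSS * HELLO_INTERVAL)

# A rebooted node must stay silent for at least DELETE_PERIOD, so the
# startup delay inherits any increase in the route timeout:
startup_delay = delete_period(ACTIVE_ROUTE_TIMEOUT)   # 15000 ms with defaults
unbounded_delay = delete_period(float("inf"))         # infinite timeout -> infinite delay
```

This is exactly why simply raising the route timeout is a dead end: the delete period, and hence the mandatory post-reboot silence, grows with it.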
Our goal is to reduce the number of route discovery operations that need to be performed by eliminating route expiry, thus reducing network traffic. Ideally, once a route is discovered, the route would remain valid until attempted communications over that route fail.
AODVjr is a simplified variant of AODV wherein only the destination node responds to a route request (in AODV, any node with a valid route can respond). This behavior may help eliminate the possibility of routing loops when there is no startup delay and should be investigated.
- The algorithm provided shall not use route expiry
- The algorithm provided shall be free of routing loops
- The algorithm provided shall stay as close as possible to AODV as defined in RFC 3561
Thorough analysis and simulation shall be performed to prove the algorithm is loop-free.